Results for 'Speech emotion detection'

967 found
  1. Deep learning approach to text analysis for human emotion detection from big data. Jia Guo - 2022 - Journal of Intelligent Systems 31 (1):113-126.
    Emotional recognition has arisen as an essential field of study that can expose a variety of valuable inputs. Emotion can be articulated in several means that can be seen, like speech and facial expressions, written text, and gestures. Emotion recognition in a text document is fundamentally a content-based classification issue, including notions from natural language processing (NLP) and deep learning fields. Hence, in this study, deep learning assisted semantic text analysis (DLSTA) has been proposed for human (...) detection using big data. Emotion detection from textual sources can be done utilizing notions of Natural Language Processing. Word embeddings are extensively utilized for several NLP tasks, like machine translation, sentiment analysis, and question answering. NLP techniques improve the performance of learning-based methods by incorporating the semantic and syntactic features of the text. The numerical outcomes demonstrate that the suggested method achieves an expressively superior quality of human emotion detection rate of 97.22% and the classification accuracy rate of 98.02% with different state-of-the-art methods and can be enhanced by other emotional word embeddings.
  2. Detecting emotion in speech expressing incongruent emotional cues through voice and content: investigation on dominant modality and language. Mariko Kikutani & Machiko Ikemoto - 2022 - Cognition and Emotion 36 (3):492-511.
  3. Emotivity in the Voice: Prosodic, Lexical, and Cultural Appraisal of Complaining Speech. Maël Mauchand & Marc D. Pell - 2021 - Frontiers in Psychology 11.
    Emotive speech is a social act in which a speaker displays emotional signals with a specific intention; in the case of third-party complaints, this intention is to elicit empathy in the listener. The present study assessed how the emotivity of complaints was perceived in various conditions. Participants listened to short statements describing painful or neutral situations, spoken with a complaining or neutral prosody, and evaluated how complaining the speaker sounded. In addition to manipulating features of the message, social-affiliative factors (...)
    1 citation
  4. Realization of Self-Adaptive Higher Teaching Management Based Upon Expression and Speech Multimodal Emotion Recognition. Huihui Zhou & Zheng Liu - 2022 - Frontiers in Psychology 13.
    In the process of communication between people, everyone will have emotions, and different emotions will have different effects on communication. With the help of external performance information accompanied by emotional expression, such as emotional speech signals or facial expressions, people can easily communicate with each other and understand each other. Emotion recognition is an important network of affective computers and research centers for signal processing, pattern detection, artificial intelligence, and human-computer interaction. Emotions convey important information in human communication (...)
  5. “Motherese” Prosody in Fetal-Directed Speech: An Exploratory Study Using Automatic Social Signal Processing. Erika Parlato-Oliveira, Catherine Saint-Georges, David Cohen, Hugues Pellerin, Isabella Marques Pereira, Catherine Fouillet, Mohamed Chetouani, Marc Dommergues & Sylvie Viaux-Savelon - 2021 - Frontiers in Psychology 12.
    Introduction: Motherese, or emotional infant directed speech, is the specific form of speech used by parents to address their infants. The prosody of IDS has affective properties, expresses caregiver involvement, is a marker of caregiver-infant interaction quality. IDS prosodic characteristics can be detected with automatic analysis. We aimed to explore whether pregnant women “speak” to their unborn baby, whether they use motherese while speaking and whether anxio-depressive or obstetrical status impacts speaking to the fetus.Participants and Methods: We conducted (...)
    1 citation
  6. Towards a Framework for Acquisition and Analysis of Speeches to Identify Suspicious Contents through Machine Learning. Md Rashadur Rahman, Mohammad Shamsul Arefin, Md Billal Hossain, Mohammad Ashfak Habib & A. S. M. Kayes - 2020 - Complexity 2020:1-14.
    The most prominent form of human communication and interaction is speech. It plays an indispensable role for expressing emotions, motivating, guiding, and cheering. An ill-intentioned speech can mislead people, societies, and even a nation. A misguided speech can trigger social controversy and can result in violent activities. Every day, there are a lot of speeches being delivered around the world, which are quite impractical to inspect manually. In order to prevent any vicious action resulting from any misguided (...)
  7. The face-to-face light detection paradigm: A new methodology for investigating visuospatial attention across different face regions in live face-to-face communication settings. Laura A. Thompson, Daniel M. Malloy, John M. Cone & David L. Hendrickson - 2010 - Interaction Studies 11 (2):336-348.
    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker's face in four (...)
  8. The face-to-face light detection paradigm: A new methodology for investigating visuospatial attention across different face regions in live face-to-face communication settings. Laura A. Thompson, Daniel M. Malloy, John M. Cone & David L. Hendrickson - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):336-348.
    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker’s face in four (...)
  9. The face-to-face light detection paradigm. Laura A. Thompson, Daniel M. Malloy, John M. Cone & David L. Hendrickson - 2010 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 11 (2):336-348.
    We introduce a novel paradigm for studying the cognitive processes used by listeners within interactive settings. This paradigm places the talker and the listener in the same physical space, creating opportunities for investigations of attention and comprehension processes taking place during interactive discourse situations. An experiment was conducted to compare results from previous research using videotaped stimuli to those obtained within the live face-to-face task paradigm. A headworn apparatus is used to briefly display LEDs on the talker’s face in four (...)
  10. A review on voice pathology: Taxonomy, diagnosis, medical procedures and detection techniques, open challenges, limitations, and recommendations for future directions. [REVIEW] Mazin Abed Mohammed, Belal Al-Khateeb & Nuha Qais Abdulmajeed - 2022 - Journal of Intelligent Systems 31 (1):855-875.
    Speech is a primary means of human communication and one of the most basic features of human conduct. Voice is an important part of its subsystems. A speech disorder is a condition that affects the ability of a person to speak normally, which occasionally results in voice impairment with psychological and emotional consequences. Early detection of voice problems is a crucial factor. Computer-based procedures are less costly and easier to administer for such purposes than traditional methods. This study (...)
  11. Sound frequency affects speech emotion perception: results from congenital amusia. Sydney L. Lolli, Ari D. Lewenstein, Julian Basurto, Sean Winnik & Psyche Loui - 2015 - Frontiers in Psychology 6.
    2 citations
  12. A composite framework for supporting user emotion detection based on intelligent taxonomy handling. Alfredo Cuzzocrea & Giovanni Pilato - 2021 - Logic Journal of the IGPL 29 (2):207-219.
    One of the most relevant issues of a social robot is its capability of catching the attention of a new acquaintance and empathize with her. The first steps towards a system which can be used by a social robot in order to be empathetic are illustrated in this paper. The system can analyze the Twitter ID of the new acquaintance, trying to detect the IAB Tier 1 categories that possibly can let arise in him/her a joyful feeling. Furthermore, it can (...)
  13. Dual-Task Interference on Early and Late Stages of Facial Emotion Detection Is Revealed by Human Electrophysiology. Amélie Roberge, Justin Duncan, Daniel Fiset & Benoit Brisson - 2019 - Frontiers in Human Neuroscience 13.
  14. Automatic detection of emotion from vocal expression. Devillers, L., Vidrascu & Layachi, O. - 2010 - In Klaus R. Scherer, Tanja Bänziger & Etienne Roesch (eds.), A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford University Press.
  15. A survey of classification techniques in speech emotion recognition. Tanmoy Roy, Tshilidzi Marwala & Snehashish Chakraverty - 2020 - In Snehashish Chakraverty (ed.), Mathematical methods in interdisciplinary sciences. Hoboken, NJ: Wiley.
  16. Emotion drives attention: detecting the snake in the grass. Arne Öhman, Anders Flykt & Francisco Esteves - 2001 - Journal of Experimental Psychology: General 130 (3):466.
    217 citations
  17. Inherent emotional quality of human speech sounds. Blake Myers-Schulz, Maia Pujara, Richard C. Wolf & Michael Koenigs - 2013 - Cognition and Emotion 27 (6):1105-1113.
    During much of the past century, it was widely believed that phonemes--the human speech sounds that constitute words--have no inherent semantic meaning, and that the relationship between a combination of phonemes (a word) and its referent is simply arbitrary. Although recent work has challenged this picture by revealing psychological associations between certain phonemes and particular semantic contents, the precise mechanisms underlying these associations have not been fully elucidated. Here we provide novel evidence that certain phonemes have an inherent, non-arbitrary (...)
    6 citations
  18. Emotional Speech Acts and the Educational Perlocutions of Speech. Renia Gasparatou - 2016 - Journal of Philosophy of Education 50 (3):319-331.
    Over the past decades, there has been an ongoing debate about whether education should aim at the cultivation of emotional wellbeing of self-esteeming personalities or whether it should prioritise literacy and the cognitive development of students. However, it might be the case that the two are not easily distinguished in educational contexts. In this paper I use J.L. Austin's original work on speech acts to emphasise the interconnection between the cognitive and emotional aspects of our utterances, and illustrate how (...)
    4 citations
  19. Detection of Genuine and Posed Facial Expressions of Emotion: Databases and Methods. Shan Jia, Shuo Wang, Chuanbo Hu, Paula J. Webster & Xin Li - 2021 - Frontiers in Psychology 11.
    Facial expressions of emotion play an important role in human social interactions. However, posed expressions of emotion are not always the same as genuine feelings. Recent research has found that facial expressions are increasingly used as a tool for understanding social interactions instead of personal emotions. Therefore, the credibility assessment of facial expressions, namely, the discrimination of genuine (spontaneous) expressions from posed (deliberate/volitional/deceptive) ones, is a crucial yet challenging task in facial expression understanding. With recent advances in computer (...)
    1 citation
  20. Emotion has no impact on attention in a change detection flicker task. Robert C. A. Bendall & Catherine Thompson - 2015 - Frontiers in Psychology 6.
  21. Heartbeat detection and the experience of emotions. Stefan Wiens, Elizabeth S. Mezzacappa & Edward S. Katkin - 2000 - Cognition and Emotion 14 (3):417-427.
  22. Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent. Sandra Gordon-Salant, Maya S. Schwartz, Kelsey A. Oppler & Grace H. Yeni-Komshian - 2022 - Frontiers in Psychology 12.
    This investigation examined age-related differences in auditory-visual integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous. The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV (...)
  23. Rapid detection of neutral faces associated with emotional value. Akie Saito, Wataru Sato & Sakiko Yoshikawa - 2022 - Cognition and Emotion 36 (3):546-559.
    1 citation
  24. Emotion Recognition from speech Support for WEB Lectures. Dragos Datcu & Léon Rothkrantz - 2007 - Communication and Cognition. Monographies 40 (3-4):203-214.
  25. Detecting emotions through non-invasive wearables. J. A. Rincon, V. Julian, C. Carrascosa, A. Costa & P. Novais - forthcoming - Logic Journal of the IGPL.
  26. Emotion-Related Consciousness Detection in Patients With Disorders of Consciousness Through an EEG-Based BCI System. Jiahui Pan, Qiuyou Xie, Haiyun Huang, Yanbin He, Yuping Sun, Ronghao Yu & Yuanqing Li - 2018 - Frontiers in Human Neuroscience 12.
  27. Detection of errors during speech production: a review of speech monitoring models. [REVIEW] Albert Postma - 2000 - Cognition 77 (2):97-132.
  28. Emotional speech processing: Disentangling the effects of prosody and semantic cues. Marc D. Pell, Abhishek Jaywant, Laura Monetta & Sonja A. Kotz - 2011 - Cognition and Emotion 25 (5):834-853.
  29. Emotional Connotations of Musical Instrument Timbre in Comparison With Emotional Speech Prosody: Evidence From Acoustics and Event-Related Potentials. Xiaoluan Liu, Yi Xu, Kai Alter & Jyrki Tuomainen - 2018 - Frontiers in Psychology 9.
    3 citations
  30. Visual signal detection as a function of sequential variability of simultaneous speech. John S. Antrobus & Jerome L. Singer - 1964 - Journal of Experimental Psychology 68 (6):603.
  31. Emotional Expressions as Speech Act Analogs. Andrea Scarantino - 2018 - Philosophy of Science 85 (5):1038-1053.
    In this article I articulate the Theory of Affective Pragmatics, which combines insights from the Basic Emotion View and the Behavioral Ecology View of emotional expressions. My core thesis is that emotional expressions are ways of manifesting one’s emotions but also of representing states of affairs, directing other people’s behaviors, and committing to future courses of actions. Since these are some of the main things we can do with language, my article’s take home message is that, from a communicative (...)
    1 citation
  32. Detecting Temporal Change in Dynamic Sounds: On the Role of Stimulus Duration, Speed, and Emotion. Annett Schirmer, Nicolas Escoffier, Xiaoqin Cheng, Yenju Feng & Trevor B. Penney - 2015 - Frontiers in Psychology 6.
    1 citation
  33. Detecting structured repetition in child-surrounding speech: Evidence from maximally diverse languages. Nicholas A. Lester, Steven Moran, Aylin C. Küntay, Shanley E. M. Allen, Barbara Pfeiler & Sabine Stoll - 2022 - Cognition 221 (C):104986.
  34. Speech perception and vocal expression of emotion. Lee H. Wurm, Douglas A. Vakoch, Maureen R. Strasser, Robert Calin-Jageman & Shannon E. Ross - 2001 - Cognition and Emotion 15 (6):831-852.
  35. Emotional expression recognition and attribution bias among sexual and violent offenders: a signal detection analysis. Steven M. Gillespie, Pia Rotshtein, Rose-Marie Satherley, Anthony R. Beech & Ian J. Mitchell - 2015 - Frontiers in Psychology 6.
    1 citation
  36. Hot Speech and Exploding Bombs: Autonomic Arousal During Emotion Classification of Prosodic Utterances and Affective Sounds. Rebecca Jürgens, Julia Fischer & Annekathrin Schacht - 2018 - Frontiers in Psychology 9.
    1 citation
  37. Emotional Connotation in Speech Perception: Semantic Associations in the General Lexicon. Douglas A. Vakoch & Lee H. Wurm - 1997 - Cognition and Emotion 11 (4):337-349.
  38. Bangla hate speech detection on social media using attention-based recurrent neural network. Md Nur Hossain, Anik Paul, Abdullah Al Asif & Amit Kumar Das - 2021 - Journal of Intelligent Systems 30 (1):578-591.
    Hate speech has spread more rapidly through the daily use of technology and, most notably, by sharing your opinions or feelings on social media in a negative aspect. Although numerous works have been carried out in detecting hate speeches in English, German, and other languages, very few works have been carried out in the context of the Bengali language. In contrast, millions of people communicate on social media in Bengali. The few existing works that have been carried out need (...)
  39. Facial Expressions of Emotion: Are Angry Faces Detected More Efficiently? Elaine Fox, Victoria Lester, Riccardo Russo, R. J. Bowles, Alessio Pichler & Kevin Dutton - 2000 - Cognition and Emotion 14 (1):61-92.
  40. Emotion selectable end-to-end text-based speech editing. Tao Wang, Jiangyan Yi, Ruibo Fu, Jianhua Tao, Zhengqi Wen & Chu Yuan Zhang - 2024 - Artificial Intelligence 329 (C):104076.
  41. The emotional triangle of the promotional speech. Eric Landowski - 2007 - Semiotica 163 (1-4):59-73.
  42. The effects of emotional stimuli on target detection: Indirect and direct resource costs. Ulrike Ossowski, Sanna Malinen & William S. Helton - 2011 - Consciousness and Cognition 20 (4):1649-1658.
    The present study was designed to explore the performance costs of negative emotional stimuli in a vigilance task. Forty participants performed a vigilance task in two conditions: one with task-irrelevant negative-arousing pictures and one with task-irrelevant neutral pictures. In addition to performance, we measured subjective state and frontal cerebral activity with near infrared spectroscopy. Overall performance in the negative picture condition was lower than in the neutral picture condition and the negative picture condition had elevated levels of energetic arousal, tense (...)
    3 citations
  43. Influence of lipreading on detection of speech in signal-correlated noise. B. H. Repp & R. Frost - 1990 - Bulletin of the Psychonomic Society 28 (6):526-526.
  44. An Experimental Phenomenological Approach to the Study of Inner Speech in Empathy: Bodily Sensations, Emotions, and Felt Knowledge as the Experiential Context of Inner Spoken Voices. Ignacio Cea, Mayte Vergara, Jorge Calderón, Alejandro Troncoso & David Martínez-Pernía - 2022 - In Ignacio Cea, Mayte Vergara, Jorge Calderón, Alejandro Troncoso & David Martínez-Pernía (eds.), New Perspectives on Inner Speech. pp. 65–80.
    The relevance of inner speech for human psychology, especially for higher-order cognitive functions, is widely recognized. However, the study of the phenomenology of inner speech, that is, what it is like for a subject to experience internally speaking his/her voice, has received much less attention. This study explores the subjective experience of inner speech through empathy for pain paradigm. To this end, an experimental phenomenological method was implemented. Sixteen healthy subjects were exposed to videos of sportswomen/sportsmen having (...)
  45. Sensitivity in detecting facial displays of emotion: Impact of maternal depression and oxytocin receptor genotype. Katie L. Burkhouse, Mary L. Woody, Max Owens, John E. McGeary, Valerie S. Knopik & Brandon E. Gibb - 2016 - Cognition and Emotion 30 (2):275-287.
  46. Synthesis of emotional speech. Schröder, M., Burkhardt, F. & Krstulovic, S. - 2010 - In Klaus R. Scherer, Tanja Bänziger & Etienne Roesch (eds.), A Blueprint for Affective Computing: A Sourcebook and Manual. Oxford University Press.
  47. The role of signal detection and amplification in the induction of emotion by music. William Forde Thompson & Max Coltheart - 2008 - Behavioral and Brain Sciences 31 (5):597-598.
    We propose that the six mechanisms identified by Juslin & Västfjäll (J&V) fall into two categories: signal detection and amplification. Signal detection mechanisms are unmediated and induce emotion by directly detecting emotive signals in music. Amplifiers act in conjunction with signal detection mechanisms. We also draw attention to theoretical and empirical challenges associated with the proposed mechanisms.
    1 citation
  48. Infants with Williams syndrome detect statistical regularities in continuous speech. Cara H. Cashon, Oh-Ryeong Ha, Katharine Graf Estes, Jenny R. Saffran & Carolyn B. Mervis - 2016 - Cognition 154 (C):165-168.
  49. Facial expressions of emotion in speech and singing. Nicole Scotto di Carlo & Isabelle Guaitella - 2004 - Semiotica 2004 (149).
    2 citations
  50. Automatic Change Detection of Emotional and Neutral Body Expressions: Evidence From Visual Mismatch Negativity. Xiaobin Ding, Jianyi Liu, Tiejun Kang, Rui Wang & Mariska E. Kret - 2019 - Frontiers in Psychology 10.
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   1 citation  
1 — 50 / 967