
Language as context for the perception of emotion

Abstract

In the blink of an eye, people can easily see emotion in another person’s face. This fact leads many to assume that emotion perception is given and proceeds independently of conceptual processes such as language. In this paper we suggest otherwise and offer the hypothesis that language functions as a context in emotion perception. We review a variety of evidence consistent with the language-as-context view and then discuss how a linguistically relative approach to emotion perception allows for intriguing and generative questions about the extent to which language shapes the sensory processing involved in seeing emotion in another person’s face.

Introduction

During a speech in the winter of 2004, photographers captured a picture of Howard Dean looking enraged; this picture cost him his political party’s endorsement to run for President of the United States. Reporters who saw Dean in context noted that he seemed happily engaged with the animated, cheering crowd. Such mistakes are easy to make. The man in Figure 1a looks angry. But look again, this time at Figure 1b. You see an elated Jim Webb celebrating the 2006 electoral victory that returned control of the United States Senate to the Democratic Party. Or consider the fact that 60%–75% of the time, people see facial portrayals of fear as ‘angry’ when the images are paired with contextual information typically associated with anger [1]. You can imagine the consequences when, in war, a soldier enters a house and sees a civilian as angry instead of fearful (or vice versa). These examples illustrate the importance of context in emotion perception. Descriptions of the social situation [2], body postures, voices, scenes [3] or other emotional faces [4] each influence how emotion is seen in the face of another person.

Figure 1.

The role of context in emotion perception (Doug Mills/The New York Times/Redux). Look at United States Senator Jim Webb in (a). Taken out of context, he looks agitated and aggressive. Yet look at him again in (b). When situated, he appears happy and excited. Without context, and with only the structural information from the face as a guide, it is easy to mistake the emotion that you see in another person. A similar error in perception was said to have cost Howard Dean the opportunity to run for President of the United States in 2004.

Context refers not only to the external surroundings in which facial actions take place but also to parallel brain processes that dynamically constrain or shape how structural information from a face is processed. In this opinion piece, we focus on one such process, language, by exploring the idea that emotion words (implicitly or explicitly) serve as an internal context to constrain the meaning of a face during an instance of emotion perception.

We begin by suggesting that the psychological phenomena referred to by the English words ‘anger,’ ‘sadness,’ ‘fear,’ ‘disgust,’ ‘surprise’ and ‘happiness’ are not expressed as fixed patterns of facial behaviors, even though studies of emotion perception employ pictures of posed, highly stereotyped configurations of facial actions (or caricatures; see Box 1). In everyday life, the available structural information in a face is considerably more variable (and ambiguous) than scientists normally assume (and certainly more ambiguous than the structural information that is presented to perceivers in the typical emotion-perception experiment). We then consider psychological and neuroscience investigations that are broadly consistent with the idea that language serves as a context to reduce the ambiguity of this information, even when caricatured faces are being used as perceptual targets. Finally, we end by suggesting that the language-as-context hypothesis reframes the linguistic-relativity debate into the more interesting question of how far down into perception language can reach.

Box 1. Universal emotion expressions?

Almost every introductory psychology textbook states that certain emotions are universally recognized across cultures, and this consensus is taken as evidence that these expressions are also universally produced. Yet, people’s shared tendency to see anger in the face of Jim Webb (Figure 1a) may be produced, in part, by the methods that scientists use to study emotion perception [14]. The majority of emotion perception studies still use decontextualized, static photographs of professional or amateur actors posing caricatures (or extreme versions) of facial configurations that maximize the distinction between categories and are more readily categorized in comparison with prototypes (or the average or most common facial behaviors) (for a discussion of how caricatures influence categorization, see [31]).

These caricatures are rarely seen in everyday life [32]. Perceivers report having minimal experience of caricatures of fear, disgust and surprise (and to some extent anger) over their lifetime [33]. Movie actors noted for their realism do not use these caricatured configurations to portray emotion [34].

People fail to produce these caricatures when asked to portray emotion on their faces. Congenitally blind infants [35], children [36] and adults [37] produce only a limited number of the predicted facial action units when portraying emotion, and they almost never produce an entire configuration of facial action units; but then neither do sighted people [37]. (This is also the case with spontaneous facial behaviors [38].) In one recent study, 100 participants were asked to adopt facial depictions of anger, sadness, fear, surprise and happiness, and only 16% of portrayals could be identified with a high degree of agreement between raters [39].

The fact that congenitally blind individuals can produce any facial actions at all may, for some, constitute evidence for the existence of endowed affect programs, but there are alternative explanations. Many of the same facial muscle movements also occur randomly in blind individuals and appear to have no specific emotional meaning over and above an increased level of arousal [37]. Furthermore, the statistical regularities in the use of color words allow blind individuals to draw some inferences about color in the absence of sensory input [40], and presumably the same would hold true for emotion.

The great emotions debate

The ‘basic emotion’ approach

Faces appear to display emotional information for you to read, like a word on a page. If you take your alacrity in seeing anger (Figure 1a) or excitement (Figure 1b) as evidence that reading emotion in faces is natural and intrinsic, then you are in good company. The ‘basic emotion’ approach is grounded in the belief that certain emotion categories are universal biological states that are (i) triggered by dedicated, evolutionarily preserved neural circuits (or affect programs), (ii) expressed as clear and unambiguous biobehavioral signals involving configurations of facial muscle activity (or ‘facial expressions’), physiological activity, instrumental behavior (or the tendency to produce a behavior) and distinctive phenomenological experience (Figure 2), and (iii) recognized by mental machinery that is innately hardwired, reflexive and universal, so that all people everywhere (barring organic disturbance) are born in possession of five or six perceptually grounded emotion categories. (An alternative view might be that people are not born in possession of these categories but instead develop them as they inductively learn the statistical regularities in emotional responses.)

Figure 2.

The natural-kind model of emotion (adapted from [2] with permission). A natural-kind model of emotion states that emotions are triggered by an event and are expressed as a recognizable signature consisting of behavioral and physiological outputs that are coordinated in time and correlated in intensity [54–56]. Presumably, these patterns allow people (including scientists) to know an emotion when they see it by merely looking at the structural features of the emoter’s face.

According to the basic emotion view, “the face, as a transmitter, evolved to send expression signals that have low correlations with one another . . . the brain, as a decoder, further de-correlates these signals” [5]. The face is presumed to encode anger (or sadness, fear, disgust etc.) in a consistent and unambiguous way so that structural information on the face is sufficient for communicating a person’s emotional state. As a consequence, experimental studies of emotion perception often rely on a set of fixed, exaggerated facial portrayals of emotion that were designed for maximum discriminability (Box 1).

Heterogeneity in emotion

There has been considerable debate over the veracity of the basic emotion model since its modern incarnation in the 1960s. There is some instrument-based (facial EMG, cardiovascular and neuroimaging) evidence in support of the idea that discrete emotions have distinct biobehavioral signatures, but there is also a considerable amount that does not support this view [6]. As William James observed, not all instances that people call ‘anger’ (or ‘sadness’ or ‘fear’) look alike, feel alike, or have the same neurophysiological signature. The implication is that emotions are not events that broadcast precise information on the face, and facial behaviors, viewed in isolation, will be ambiguous as to their emotional meaning. Structural information from the face is necessary, but probably not sufficient, for emotion perception.

The ‘emotion paradox’

Experience tells us, however, that people have little trouble categorizing a myriad of heterogeneous behaviors into discrete emotion categories such as happiness or sadness. Numerous studies suggest that emotion perception is categorical (although these studies have all relied on caricatured emotional faces or morphs of these faces, neither of which captures the degree of variability that actually exists in facial behaviors during emotional events) [7,8]. Taken together, the instrument- and perception-based findings frame an ‘emotion paradox’: People can automatically and effortlessly see Jim Webb as angry (Figure 1a) or elated (Figure 1b) even though sufficient information for this judgment is not unambiguously displayed on his face or in his body.

One solution to the emotion paradox is that emotion categories are nominal kinds (man-made categories that are acquired and imposed on, rather than discovered in, the world) whose conceptual content constrains the meaning of information available on the face to produce the psychological events that people call ‘anger’ or ‘elation’ [2]. Conceptual knowledge has the capacity to produce categorical perception (often via automatic labeling), even when the sensory features of the stimuli do not, on their own, warrant it [9]. Moreover, there is accumulating evidence that words ground category acquisition and function like conceptual glue for the members of a category, and this might also be true of emotion categories (Box 2). Our hypothesis: emotion words (with associated conceptual content) that become accessible serve to reduce the uncertainty that is inherent in most natural facial behaviors and constrain their meaning to allow for quick and easy perceptions of emotion.

Box 2. The power of a word

Early in the 20th century, Hunt observed that ‘the only universal element in any emotional situation is the use by all the subjects of a common word, i.e. “fear”’ [41]. Little did Hunt realize that a word may be enough. Words have a powerful impact on a person’s ability to group together objects or events to form a category (i.e. category acquisition), even a completely novel category [42]. When an infant is as young as 6 months, words guide categorization of animals and objects by directing the infant to focus on the obvious and inferred similarities shared by animals or objects with the same name [42,43]. Xu, Cote and Baker [44] refer to words as ‘essence placeholders’ because a word allows an infant to categorize a new object as a certain kind and to make inductive inferences about the new object on the basis of prior experiences with other objects of the same kind. On the basis of these findings, we can hypothesize that emotion words anchor and direct a child’s acquisition of emotion categories [2] and play a central role in the process of seeing a face as angry, afraid or sad, even in prelinguistic infants.

Studies of emotion perception in infants do nothing to render this hypothesis implausible. Contrary to popular belief, these studies do not conclusively demonstrate that infants distinguish between discrete emotion categories. Infants categorize faces with different perceptual features as distinct (e.g. closed vs toothy smiles) even when they belong to the same emotion category [45], and no studies can rule out the alternative explanation that infants are categorizing faces based on the valence, intensity or novelty (especially in the case of fear) of the facial configurations. For example, infants look longer at fear (or anger or sadness) caricatures after habituation to happy caricatures, but this increased looking time might reflect their ability to distinguish between faces of different valence (e.g. [46]). Similarly, infants look longer at a sad face after habituation to angry faces (or vice versa), but infants might be categorizing the faces in terms of arousal (e.g. [47], Experiment 3). Many studies find that infants tend to show biased attention to fear caricatures (e.g. [46]), but this is probably driven by the fact that infants rarely see people making these facial configurations.

No experiment to date has studied specific links between the acquisition of specific emotion words and the perception of the corresponding category in very young children, but existing studies provide some clues. General language proficiency and exposure to emotion words in conversation play a key role in helping children develop an understanding of mental states, such as emotions, and allow them to attribute emotion to other people on the basis of situational cues (e.g. [48]). Children with language impairment (but preserved cognitive, sensory and motor development) have more difficulty with emotion perception tasks [49], as do hearing-impaired children with linguistic delays (such children show reduced perceptual sensitivity to the onset of emotional expressions as measured with a morph movies task [50]). Most telling, young children (two- to seven-year-olds) find it much easier to match a photo of a human face posing an emotion (such as in Figure 1a) to an emotion word (such as ‘anger’) than to a photo of another human face depicting the same emotion [51].

Evidence for the role of language in emotion perception

Some studies are consistent with, but not necessarily direct evidence for, the language-as-context hypothesis. For example, a recent meta-analysis of neuroimaging studies [10] found that inferior frontal gyrus (IFG), extending from the pars opercularis (Broca’s area, BA 44) through pars triangularis (BA 45) and pars orbitalis on the inferior frontal convexity (BA 47/12l), is part of the distributed neural system that supports emotion perception. IFG is broadly implicated in a host of cognitive processes, including language [11] and the goal-related retrieval of conceptual knowledge [12]. The act of providing an emotional label to caricatured emotional faces (as opposed to a gender label) increases neural activity in right IFG and produces a corresponding decrease in amygdala response [13]. This reduction in amygdala response can be thought of as reflecting a reduced ambiguity in the meaning of the structural information from the face.

Other studies offer evidence that more directly supports the language-as-context hypothesis, even when people view caricatured portrayals. Failure to provide perceivers with a small set of emotion labels to choose from when judging caricatures (i.e. requiring participants to free label) significantly reduces ‘recognition accuracy’ [14], leading to the conclusion that emotion words (when they are offered) are constraining people’s perceptual choices. A similar effect can be observed in event-related potential (ERP) studies of emotion perception. Early ERPs resulting from structural analysis of a face (as early as 80 ms, but typically between 120 and 180 ms after stimulus onset, depending on whether the face is presented foveally or parafoveally) do not distinguish caricatured portrayals of discrete emotions from one another but instead reflect the categorization of the face as a face (vs a non-face), as generally affective (neutral vs valenced), as valenced (e.g. happy vs sad), or as portraying some degree of arousal (for reviews, see [15–17]). Yet when participants explicitly categorize caricatures as ‘anger’ or ‘fear’, P1 and N170 ERPs are differentially sensitive to anger and fear faces that were incongruously paired with fear and anger body postures, suggesting that these components distinguished between the two emotion categories. Presumably, participants would need to perceive that the faces and bodies were associated with different emotion categories to see them as incongruous [18].

In addition, emotion words cause a perceptual shift in the way that faces are seen. Morphed faces depicting an equal blend of happiness and anger are encoded as angrier when those faces are paired with the word ‘angry’, and they are encoded as even angrier when participants are asked to explain why those faces are angry [19]. Moreover, the pattern of neural activity associated with judging a neutral face as fearful or disgusted is similar (although not identical) to the pattern associated with looking at caricatured fearful and disgusted faces (Figure S2 in online supplementary materials for [20]).

Possibly the most direct experimental evidence for the language-as-context hypothesis comes from studies that manipulate language and look at the resulting effects on emotion perception. Verbalizing words disrupts the ability to make correct perceptual judgments about faces, presumably because it interferes with access to judgment-necessary language [21]. A temporary reduction in the accessibility of an emotion word’s meaning (via a semantic satiation procedure, Figure 3) leads to slower and less accurate perceptions of an emotion, even when participants are not required to verbally label the target faces [22].

Figure 3.

Semantic-satiation paradigm. Participants in [22] performed a number of trials in which they repeated an emotion word such as ‘anger’ aloud either three times (temporarily increasing its accessibility) or 30 times (temporarily reducing its accessibility), after which they were asked to judge whether two faces matched or did not match in emotional content. Participants were slower and less accurate to correctly judge emotional faces (e.g. two anger faces) as matching when they had repeated the relevant emotion word (e.g. ‘anger’) 30 times (i.e. when the meaning of the word was made temporarily inaccessible). By examining response times and accuracy rates for various trial types, researchers were able to rule out fatigue as an alternative explanation for the observed effects (e.g. emotion perception was similarly encumbered when participants repeated an irrelevant emotion word either three or 30 times, whereas fatigue would have caused a decrease only when the word was repeated 30 times).

Implications

In this paper, we have suggested that people usually go beyond the information given on the face when perceiving emotion in another person. Emotion perception is shaped by the external context that a face inhabits and by the internal context that exists in the mind of the perceiver during an instance of perception. Language’s role in emotion perception, however unexpected, is consistent with emerging evidence of its role in color perception [23], the visualization of spatial locations [24], time perception [25] and abstract inference [26]. In our view, language is linked to conceptual knowledge about the world that is derived from prior experience and that is re-enacted during perception [27]. It may be that all context influences emotion perception via such conceptual knowledge, but that remains to be seen.

Outstanding questions remain regarding the role of language in the perception of emotion (see Box 3). In our view, the language-as-context hypothesis is generative because it moves past the debate between the strong version of linguistic relativity (which is untenable) and the weak version (which some consider less interesting) into a more interesting question of process: how far down into perceptual processing does language reach?

Box 3. Outstanding questions

  1. Do emotion words anchor the conceptual system for emotion and support emotion-category acquisition in infants?

  2. How does language shape the sensory-based (bottom-up) versus memory-based (top-down) processes supporting the perception of emotion?

  3. Does the influence of language on emotion perception vary with context or task demands?

  4. Do individual (or cultural) differences in emotion vocabulary translate into differences in structure and content of the conceptual system for emotion and into differences in emotion perception?

  5. Can emotion perception be improved by language-based training programs?

One possibility is that language has its influence at a certain stage of stimulus categorization, where memory-based conceptual knowledge about emotion is being brought to bear on an already formed percept (an existing perceptual categorization that is computed based on the structural configuration of the face). Language may help to resolve competing ‘perceptual hypotheses’ that arise from a structural analysis, particularly when other contextual information fails to do so or when such information is absent altogether.

A second, perhaps more intriguing, possibility is that language contributes to the construction of the emotional percept by dynamically reconfiguring how structural information from the face is processed. Researchers increasingly question the psychological distinctiveness of perceptual and conceptual processing (Box 4). Based on findings that conceptual processing shapes how sensory information is sampled from the physical surroundings [28], it is possible that emotion words influence how people sample and process the sensory information in a visual array (a face) to construct an emotional percept (the sensory sampling hypothesis). Based on findings that words are understood by re-activating (or re-enacting) representations of prior experience in sensorimotor cortex [27,29], it is possible that emotion words initiate the simulation of specific sensory information previously paired with those words, and this simulation might then contribute to how incoming sensory information from a target face is processed (the sensory inference hypothesis) [2,30].

Finally, the language-as-context hypothesis sets the stage for future research on how language influences other forms of social perception, such as the perception of gender and race. If conceptual knowledge shapes the perception of social reality and language shapes conceptual development, then language might play a much larger role in shaping our social reality, indeed the construction of our social worlds, than previously assumed.

Box 4. The perception-versus-conception distinction

Seeing (or hearing or touching) feels altogether different from thinking, and so for many years psychologists called these processes by different names: ‘perception’ and ‘conception,’ respectively. Although psychologists always allowed that perception and conception might influence one another, the assumption has been that they are separate but interacting parts (with no necessary causal relationship to one another) in a mind that works like a machine. Indeed, many psychological models are grounded in Descartes’ machine metaphor [52]. Slowly, however, scientists are turning to other metaphors as they discover how the brain works to instantiate the mind. In the process, the distinction between perception and conception has been all but dissolved.

We now know that conceptualization involves what are traditionally referred to as perceptual processes. Situation-specific ‘simulations’ of past sensory-motor representations ground knowledge [27,53]. We also know that perception involves conception. For example, categorization goals influence how sensory information is sampled and processed from a visual array [28]. Taken together, these findings suggest that the dichotomy between ‘perception’ and ‘conception’ is not as distinct as once thought. Instead, sensory-based and memory-based processes probably run in parallel in the brain, constraining one another as they instantiate experience.

Acknowledgments

Preparation of this paper was supported by a National Science Foundation Graduate Research Fellowship to Kristen Lindquist, and by National Institute of Mental Health (NIMH) grant K02 MH001981 and National Institute on Aging (NIA) grant R01 AG030311 to Lisa Feldman Barrett. Many thanks to Boston College Media Technology services for their support in producing figure graphics. Thanks also to Linda Camras and Luiz Pessoa for their comments on an early draft of this manuscript and to Kevin Ochsner for discussions pertaining to the language-as-context hypothesis.

References

1. Carroll JM, Russell JA. Do facial expressions signal specific emotions? Judging emotion from the face in context. J Pers Soc Psychol. 1996;70:205–218. [Abstract] [Google Scholar]
2. Barrett LF. Solving the emotion paradox: categorization and the experience of emotion. Pers Soc Psychol Rev. 2006;10:20–46. [Abstract] [Google Scholar]
3. de Gelder B, et al. Beyond the face: exploring rapid influences of context on face processing. Prog Brain Res. 2006;155:37–48. [Abstract] [Google Scholar]
4. Russell JA, Fehr B. Relativity in the perception of emotion in facial expressions. J Exp Psychol Gen. 1987;116:223–237. [Google Scholar]
5. Smith ML, et al. Transmitting and decoding facial expressions. Psychol Sci. 2005;16:184–189. [Abstract] [Google Scholar]
6. Barrett LF. Emotions as natural kinds? Perspect Psychol Sci. 2006;1:28–58. [Abstract] [Google Scholar]
7. Dailey MN, et al. EMPATH: a neural network that categorizes facial expressions. J Cogn Neurosci. 2002;14:1158–1173. [Abstract] [Google Scholar]
8. Susskind JM, et al. Human and computer recognition of facial emotion. Neuropsychologia. 2007;45:152–162. [Abstract] [Google Scholar]
9. Goldstone RL, et al. Altering object representations through category learning. Mem Cognit. 2001;29:1051–1060. [Abstract] [Google Scholar]
10. Wager T, et al. The neuroimaging of emotion. In: Lewis M, Haviland-Jones JM, Barrett LF, editors. The Handbook of Emotion. 3rd edn. Guilford Press; in press. [Google Scholar]
11. Gitelman DR, et al. Language network specializations: an analysis with parallel task designs and functional magnetic resonance imaging. Neuroimage. 2005;26:975–985. [Abstract] [Google Scholar]
12. Badre D, et al. Dissociable controlled retrieval and generalized selection mechanisms in ventrolateral prefrontal cortex. Neuron. 2005;47:907–918. [Abstract] [Google Scholar]
13. Lieberman MD, et al. Putting feelings into words: affect labeling disrupts amygdala activity in response to affective stimuli. Psychol Sci. 2007;18:421–428. [Abstract] [Google Scholar]
14. Russell JA. Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol Bull. 1994;115:102–141. [Abstract] [Google Scholar]
15. Eimer M, Holmes A. Event-related brain potential correlates of emotional face processing. Neuropsychologia. 2007;45:15–31. [Europe PMC free article] [Abstract] [Google Scholar]
16. Palermo R, Rhodes G. Are you always on my mind? A review of how face perception and attention interact. Neuropsychologia. 2007;45:75–92. [Abstract] [Google Scholar]
17. Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia. 2007;45:174–194. [Abstract] [Google Scholar]
18. Meeren HKM, et al. Rapid perceptual integration of facial expression and emotional body language. Proc Natl Acad Sci U S A. 2005;102:16518–16523. [Europe PMC free article] [Abstract] [Google Scholar]
19. Halberstadt JB, Niedenthal PM. Effects of emotion concepts on perceptual memory for emotional expressions. J Pers Soc Psychol. 2001;81:587–598. [Abstract] [Google Scholar]
20. Thielscher A, Pessoa L. Neural correlates of perceptual choice and decision making during fear-disgust discrimination. J Neurosci. 2007;27:2908–2917. [Europe PMC free article] [Abstract] [Google Scholar]
21. Roberson D, Davidoff J. The categorical perception of colors and facial expressions: the effect of verbal interference. Mem Cognit. 2000;28:977–986. [Abstract] [Google Scholar]
22. Lindquist KA, et al. Language and the perception of emotion. Emotion. 2006;6:125–138. [Abstract] [Google Scholar]
23. Davidoff J. Language and perceptual categorization. Trends Cogn Sci. 2001;5:382–387. [Abstract] [Google Scholar]
24. Levinson SC, et al. Returning the tables: language affects spatial reasoning. Cognition. 2002;84:155–188. [Abstract] [Google Scholar]
25. Boroditsky L. Does language shape thought? Mandarin and English speakers’ conceptions of time. Cognit Psychol. 2001;43:1–22. [Abstract] [Google Scholar]
26. Boroditsky L, et al. Sex, syntax and semantics. In: Gentner D, Goldin-Meadow S, editors. Language in Mind: Advances in the Study of Language and Thought. MIT Press; 2003. pp. 61–79. [Google Scholar]
27. Barsalou LW. Embodied cognition. Annu Rev Psychol. in press. [Abstract] [Google Scholar]
28. Sowden PT, Schyns PG. Channel surfing in the visual brain. Trends Cogn Sci. 2006;10:538–545. [Abstract] [Google Scholar]
29. Barsalou LW, et al. Grounding conceptual knowledge in modality-specific systems. Trends Cogn Sci. 2003;7:84–91. [Abstract] [Google Scholar]
30. Niedenthal PM. Embodying emotion. Science. 2007;316:1002–1005. [Abstract] [Google Scholar]
31. Goldstone RL, et al. Conceptual interrelatedness and caricatures. Mem Cognit. 2003;31:169–180. [Abstract] [Google Scholar]
32. Russell JA, et al. Facial and vocal expression of emotion. Annu Rev Psychol. 2003;54:329–349. [Abstract] [Google Scholar]
33. Somerville LH, Whalen PJ. Prior experience as a stimulus category confound: an example using facial expressions of emotion. Soc Cogn Affect Neurosci. 2006;1:271–274. [Europe PMC free article] [Abstract] [Google Scholar]
34. Carroll JM, Russell JA. Facial expressions in Hollywood’s portrayal of emotion. J Pers Soc Psychol. 1997;72:164–176. [Google Scholar]
35. Fraiberg S. Insights from the Blind: Comparative Studies of Blind and Sighted Infants. Basic Books; 1977. [Google Scholar]
36. Roch-Levecq AC. Production of basic emotions by children with congenital blindness: Evidence for the embodiment of theory of mind. Br J Dev Psychol. 2006;24:507–528. [Google Scholar]
37. Galati D, et al. Voluntary facial expression of emotion: comparing congenitally blind with normal sighted encoders. J Pers Soc Psychol. 1997;73:1363–1379. [Abstract] [Google Scholar]
38. Galati D, et al. Judging and coding facial expression of emotions in congenitally blind children. Int J Behav Dev. 2001;25:268–278. [Google Scholar]
39. Batty M, Taylor MJ. Early processing of the six basic facial emotional expressions. Brain Res Cognit Brain Res. 2003;17:613–620. [Abstract] [Google Scholar]
40. Shepard RN, Cooper LA. Representation of colors in the blind, color-blind and normally sighted. Psychol Sci. 1992;3:97–104. [Google Scholar]
41. Hunt WA. Recent developments in the field of emotion. Psychol Bull. 1941;38:249–276. [Google Scholar]
42. Booth AE, Waxman SR. Object names and object functions serve as cues to categories in infants. Dev Psychol. 2002;38:948–957. [Abstract] [Google Scholar]
43. Fulkerson AL, et al. Linking object names and object categories: words (but not tones) facilitate object categorization in 6- and 12-month-olds. In: Bamman D, Magnitskaia T, Zaller C, editors. Supplement to the Proceedings of the 30th Boston University Conference on Language Development. Cascadilla Press; 2006. [Google Scholar]
44. Xu F, et al. Labeling guides object individuation in 12-month-old infants. Psychol Sci. 2005;16:372–377. [Abstract] [Google Scholar]
45. Caron RF, et al. Do infants see emotional expressions in static faces? Child Dev. 1985;56:1552–1560. [Abstract] [Google Scholar]
46. Bornstein MH, Arterberry ME. Recognition, discrimination and categorization of smiling by 5-month-old infants. Dev Sci. 2003;6:585–599. [Google Scholar]
47. Flom R, Bahrick LE. The development of infant discrimination in multimodal and unimodal stimulation: The role of intersensory redundancy. Dev Psychol. 2007;43:238–252. [Europe PMC free article] [Abstract] [Google Scholar]
48. de Rosnay, et al. A lag between understanding false belief and emotion attribution in young children: Relationships with linguistic ability and mother’s mental-state language. Br J Dev Psychol. 2004;22:197–218. [Google Scholar]
49. Spackman MP, et al. Understanding emotions in context: the effects of language impairment on children’s ability to infer emotional reactions. Int J Lang Commun Disord. 2006;41:173–188. [Abstract] [Google Scholar]
50. Dyck MJ, et al. Emotion recognition/understanding ability in hearing or vision-impaired children: do sights, or words make the difference? J Child Psychol Psychiatry. 2004;45:789–800. [Abstract] [Google Scholar]
51. Russell JA, Widen SC. A label superiority effect in children’s categorization of facial expressions. Soc Dev. 2002;11:30–52. [Google Scholar]
52. Barrett LF, Lindquist KA. The embodiment of emotion. In: Semin G, Smith E, editors. Embodied Grounding: Social, Cognitive, Affective, and Neuroscience Approaches. Cambridge University Press; in press. [Google Scholar]
53. Gallese V. Embodied simulation: From neurons to phenomenal experience. Phenomenology and the Cognitive Sciences. 2005;4:23–48. [Google Scholar]
54. Ekman P. Universals and cultural differences in facial expressions of emotion. In: Cole J, editor. Nebraska Symposium on Motivation 1971. University of Nebraska Press; 1972. pp. 207–283. [Google Scholar]
55. Izard CE. The Face of Emotion. Appleton-Century-Crofts; 1971. [Google Scholar]
56. Tomkins SS. Affect, Imagery, Consciousness: Vol. I and II. The Positive Affects. Springer; 1962–3. [Google Scholar]
