How distinct is the coding of face identity and expression? Evidence for some common dimensions in face space
Introduction
There is a long-standing debate about whether face identity and expression are processed in distinct visual pathways or whether a shared perceptual representation underlies coding of both attributes. Early models proposed that identity, which requires the coding of invariant aspects of faces, and expression, which requires the coding of changeable aspects of faces, are processed in functionally and neurally distinct visual pathways (Bruce and Young, 1986; Haxby et al., 2000; Haxby et al., 2002). These highly influential models were motivated by the existence of dissociable deficits in recognizing identity and expression, and by distinct neural correlates in visual cortical areas for these attributes.
However, others have challenged the idea of independent pathways, noting that dissociations between deficits need not arise at a perceptual level and that the selectivity of neurons and neural areas for these attributes is far from complete (for reviews see Calder, 2011; Calder and Young, 2005). For example, the Fusiform Face Area (FFA), which codes identity, and the posterior Superior Temporal Sulcus (pSTS), which codes expression, are actually sensitive to perceived changes in both attributes (Fox, Moon, Iaria, & Barton, 2009). In addition, parts of the ventral fusiform gyrus near the FFA respond rapidly (within 120 ms) to both dynamic expressions and static aspects of faces such as identity (Kawasaki et al., 2012). These rapid responses seem consistent with some shared feed-forward visual processing of identity and expression, although it is difficult to rule out feedback from post-perceptual emotion processing areas. Thus, this evidence for shared processing is equivocal.
Behavioural evidence, mostly from classification studies, also challenges the independent processing of identity and expression. Initial studies reported that changes in identity affected expression judgments, but not vice versa (Schweinberger et al., 1999; Schweinberger and Soukup, 1998). This unidirectional influence has also been reported using a visual adaptation paradigm, with changes in identity reducing expression aftereffects (Ellamil et al., 2008; Fox and Barton, 2007; Skinner and Benton, 2012), but not vice versa (Fox, Oruc, & Barton, 2008). However, when the discriminability of expression and identity is well matched, both directions of influence have been reported in a variety of paradigms (e.g., Fitousi and Wenger, 2013; Ganel and Goshen-Gottstein, 2004; Wang et al., 2013; Yankouskaya et al., 2012). If these effects reflect perceptual rather than post-perceptual analysis, then they challenge the independent visual processing of identity and expression. Some support for this assumption comes from evidence that interactions occur in visual adaptation studies (e.g., Fox et al., 2008), which tap perceptual processing, and for upright but not inverted faces, which do not engage face-coding mechanisms very effectively (Yankouskaya et al., 2012). Finally, recent work on individual differences also fails to support independent visual processing of identity and expression, with positive correlations observed between identity and expression recognition (Palermo, O’Connor, Davis, Irons, & McKone, 2013).
Taken together, these findings may suggest common, rather than distinct, visual processing of identity and expression. But what might common coding mean? One proposal, motivated by impaired holistic processing of both identity and expression in developmental prosopagnosia, is that there is a common processing stage of holistic coding for both attributes (Palermo et al., 2013; Palermo et al., 2011) (but see Calder, Young, Keane, & Dean, 2000 for evidence of independent holistic processing of identity and expression in neurotypical adults). On this view, representations of identity and expression would share a common holistic format (Calder, Burton, Miller, Young, & Akamatsu, 2001). It remains unclear, however, whether the same actual representations are used for identity and expression, or whether there are distinct holistic representations for each attribute. Distinct representations are certainly possible in principle, as distinct image components are able to support accurate discrimination (using linear discriminant analysis) of identity and expression (Calder et al., 2001).
Here we ask whether there is a common perceptual representation underlying the perception of identity and expression. By a common representation, we mean one that contains dimensions used to code both identity and expression (common dimensions), as well as dimensions that are selective for identity or expression (see Fig. 22.5 in Calder, 2011). Principal Components Analysis (PCA) of face images has demonstrated that common image components (cf. dimensions) can in principle support the discrimination of identity and expression (Calder, 2011; Calder et al., 2001). However, it is not yet known whether such dimensions exist in human face space.
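To make the PCA logic concrete, it can be sketched on synthetic data. Everything below (the image size, the identity and expression templates, and the nearest-centroid classifier used in place of linear discriminant analysis) is an illustrative assumption, not the actual method or stimuli of Calder et al. (2001):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "face images": each combines an identity signal and an
# expression signal plus noise (sizes and scales are illustrative).
n_pixels, n_ids, n_exprs, n_reps = 50, 4, 3, 20
id_templates = rng.normal(size=(n_ids, n_pixels))
ex_templates = rng.normal(size=(n_exprs, n_pixels))

images, id_labels, ex_labels = [], [], []
for i in range(n_ids):
    for e in range(n_exprs):
        for _ in range(n_reps):
            images.append(id_templates[i] + ex_templates[e]
                          + 0.3 * rng.normal(size=n_pixels))
            id_labels.append(i)
            ex_labels.append(e)
X = np.array(images)
id_labels = np.array(id_labels)
ex_labels = np.array(ex_labels)

# PCA via SVD on the mean-centred images: one shared set of components,
# extracted without reference to either identity or expression labels.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T  # project every image onto the first 10 components

def loo_nearest_centroid_accuracy(scores, labels):
    """Leave-one-out nearest-centroid classification in component space."""
    classes = np.unique(labels)
    correct = 0
    for k in range(len(scores)):
        keep = np.arange(len(scores)) != k
        centroids = np.array([scores[keep & (labels == c)].mean(axis=0)
                              for c in classes])
        pred = classes[np.argmin(np.linalg.norm(centroids - scores[k], axis=1))]
        correct += int(pred == labels[k])
    return correct / len(scores)

# The same ten components support discrimination of both attributes;
# both accuracies come out high in this easy synthetic setting.
acc_id = loo_nearest_centroid_accuracy(scores, id_labels)
acc_ex = loo_nearest_centroid_accuracy(scores, ex_labels)
```

The point of the sketch is only that a single set of components can support discrimination of both identity and expression, which is the sense of "common dimensions" at issue; whether human face space actually contains such dimensions is the empirical question the study addresses.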
Our first goal here is to determine whether high-level face space contains any common dimensions that code both identity and expression. If we find that it does, then a second goal is to determine whether adaptive coding of such dimensions contributes to our ability to recognize faces and their expressions. There is increasing evidence that adaptive coding of face dimensions, indexed by face aftereffects, is important for face expertise. Adaptation of identity-related dimensions is linked to identity recognition ability (Dennett et al., 2012; Rhodes et al., 2014) and adaptation of expression-related dimensions is linked to expression recognition ability (Palermo et al., 2015; Palermo et al., 2013). Therefore, if any common dimensions contribute to coding both identity and expression, then adaptation of those dimensions should be linked to our ability to recognize both attributes.
We used a novel approach that examines individual differences in perceptual aftereffects. Aftereffects are widely used to investigate visual representations and coding mechanisms for faces and other stimuli (Clifford and Rhodes, 2005; Rhodes and Leopold, 2011; Webster, 2011), and have been dubbed the psychologist’s microelectrode (Frisby, 1980). They occur when exposure (adaptation) to a stimulus alters neural processing and changes the perception of a subsequently viewed stimulus, as in the classic waterfall illusion when stationary objects appear to move upwards after viewing a downward-flowing waterfall (Mather, Verstraten, & Anstis, 1998). More generally, aftereffects reflect the adaptive updating of perceptual dimensions by experience. This updating helps to dynamically calibrate coding mechanisms to perceptual inputs, and plays an important functional role in perception (Clifford and Rhodes, 2005; Rhodes and Leopold, 2011; Webster and MacLeod, 2011).
We measured face identity and expression aftereffects in a large group of adults. If there are common dimensions that code both identity and expression, then we should find a positive association between these aftereffects, reflecting adaptation of those dimensions. Of course there could be other reasons for such an association, so we measured two other aftereffects with a view to ruling out plausible alternative accounts. We measured gaze aftereffects to test for a broader face adaptability factor, perhaps reflecting individual differences in attention to faces (Rhodes et al., 2011). We measured tilt aftereffects to test for a more general adaptability factor unrelated to face adaptation. Such a factor could reflect either genuine individual differences in perceptual plasticity or perhaps just differences in attention to adapting stimuli. If identity and expression aftereffects correlate with each other, but not with gaze or tilt aftereffects, then we could rule out differences in these other factors as the cause of the link. We used a size change between adapt and test stimuli to minimize the contribution of lower-level, retinotopic adaptation to the aftereffects.
To test whether adaptation of common dimensions is linked to our ability to recognize identity and expression, we used factor analysis to derive a factor reflecting adaptation of common identity/expression dimensions and used multiple regression to test whether this common adaptation predicts identity and expression recognition ability. If it does, then we would have evidence consistent with a functional role for adaptive coding of these common dimensions in our ability to recognize faces and their expressions. Of course, we do not expect the coding of identity and expression to be based solely on common dimensions. Dimensions that are selective for each attribute would also contribute to our ability to recognize identity and expression. To test this hypothesis, we used regression to determine whether identity and expression aftereffects contribute independently to identity and expression recognition ability, respectively.
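The analysis strategy can be illustrated on simulated individual-difference data. The one-factor solution is approximated here by the first principal component of the standardised aftereffect scores, a simplification of the factor analysis the authors report; all loadings, noise levels, and variable names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 355  # sample size matching the study

# Simulated aftereffects that share one latent adaptability factor
# (loadings and noise levels are illustrative assumptions).
common = rng.normal(size=n)
identity_ae = 0.7 * common + rng.normal(scale=0.7, size=n)
expression_ae = 0.7 * common + rng.normal(scale=0.7, size=n)

# Approximate a one-factor solution by the first principal component
# of the standardised aftereffect scores.
Z = np.column_stack([identity_ae, expression_ae])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
w = eigvecs[:, -1]            # loadings on the largest component
w = w if w.sum() > 0 else -w  # fix the arbitrary sign so loadings are positive
factor = Z @ w                # each person's common-adaptation score

# Recognition ability partly driven by the same latent factor, and an
# OLS regression testing whether factor scores predict it.
recognition = 0.5 * common + rng.normal(scale=0.9, size=n)
X = np.column_stack([np.ones(n), factor])
beta, *_ = np.linalg.lstsq(X, recognition, rcond=None)
r = np.corrcoef(factor, recognition)[0, 1]
# A reliably positive slope (beta[1]) and correlation (r) would mirror
# the predicted link between common adaptation and recognition ability.
```

In this simulation a positive association between factor scores and recognition ability emerges because both are driven by the same latent variable; in the real data, the authors additionally test whether identity and expression aftereffects retain independent predictive contributions beyond the common factor.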
Participants
The sample consisted of 355 adults, comprising 292 Caucasian (207 females; M = 20.3 years, SD = 2.3 years, range = 18–29; 85 males; M = 20.9 years, SD = 2.4 years, range = 17–29) and 63 Asian (47 females; M = 20.4 years, SD = 2.4 years, range = 17–30; 16 males; M = 20.1 years, SD = 1.7 years, range = 18–24) participants. All were undergraduate psychology students from the University of Western Australia who participated for course credit. A large sample (over 300) is recommended for the factor analyses planned (Field, 2013).
Descriptive statistics and reliability
Table 1 shows reliabilities and descriptive statistics. Reliability was acceptable for all measures, although generally lower than those reported previously. This difference may be due to the use of group testing in the present study. Only CFMT Residuals and CCMT scores were normally distributed, but skew and kurtosis were within acceptable limits for parametric analyses for all variables (Table 1; Stuart & Kendall, 1958).
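For readers running similar checks on their own data, the moment-based skewness and excess kurtosis statistics that such normality assessments rest on can be computed directly; the scores below are simulated, not the study's data, and the near-zero expectation holds only for a normal sample:

```python
import numpy as np

def sample_skew_kurtosis(x):
    """Moment-based skewness and excess kurtosis of a sample."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    m2 = np.mean(d ** 2)
    skewness = np.mean(d ** 3) / m2 ** 1.5
    excess_kurtosis = np.mean(d ** 4) / m2 ** 2 - 3.0
    return skewness, excess_kurtosis

rng = np.random.default_rng(2)
scores = rng.normal(loc=70, scale=10, size=355)  # hypothetical test scores
skewness, kurt = sample_skew_kurtosis(scores)
# For a sample drawn from a normal distribution, both values should be
# close to zero; large departures would argue against parametric analyses.
```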
General discussion
We used individual differences in perceptual aftereffects to test whether there is a common visual representation underlying the coding of identity and expression (Calder & Young, 2005). Specifically, we asked whether face space contains dimensions that code both attributes. Our results suggest that it does. Identity and expression aftereffects were significantly positively correlated, and loaded on a single factor in a factor analysis. Moreover, adaptation of these common dimensions
Acknowledgements
This research was supported by the Australian Research Council Centre of Excellence in Cognition and its Disorders (CE110001021), an ARC Professorial Fellowship to Rhodes (DP0877379), an ARC Discovery Outstanding Researcher Award to Rhodes (DP130102300), an ARC Discovery Grant to Palermo (DP110100850) and an ARC Australian Postdoctoral Fellowship to Jason Bell (DP110101511). It was also supported by the UK Medical Research Council under project code MC-A060-5PQ50 (Andrew J. Calder). Ethical
References (64)
- et al. (2013). Using regression to measure holistic face processing reveals a strong link with face recognition ability. Cognition.
- et al. (2006). The Cambridge Face Memory Test: Results for neurologically intact individuals and an investigation of its validity using inverted face stimuli and prosopagnosic participants. Neuropsychologia.
- et al. (2007). What is adapted in face adaptation? The neural representations of expression in the human visual system. Brain Research.
- et al. (2009). The correlates of subjective perception of identity and expression in the face network: An fMRI adaptation study. Neuroimage.
- et al. (2011). Where cognitive development and aging meet: Face learning ability peaks after age 30. Cognition.
- et al. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences.
- et al. (2002). Human neural systems for face recognition and social communication. Biological Psychiatry.
- et al. (2013). Four-year-olds use norm-based coding for face identity. Cognition.
- et al. (2011). Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia. Neuropsychologia.
- et al. (2007). TMS evidence for the involvement of the right occipital face area in early face processing. Current Biology.
- Orientation-sensitivity of face identity aftereffects. Vision Research.
- Enhanced attention amplifies face adaptation. Vision Research.
- The neural basis of the behavioral face-inversion effect. Current Biology.
- Perceived gaze direction and the processing of facial displays of emotion. Psychological Science.
- Effects of direct and averted gaze on the perception of facially communicated emotion. Emotion.
- How do eye gaze and facial expression interact? Visual Cognition.
- Diagnosing prosopagnosia: Effects of ageing, sex, and participant–stimulus ethnic match on the Cambridge Face Memory Test and Cambridge Face Perception Test. Cognitive Neuropsychology.
- Understanding face recognition. British Journal of Psychology.
- Nine-year-old children use norm-based coding to visually represent facial expression. Journal of Experimental Psychology: Human Perception and Performance.
- Does facial identity and facial expression recognition involve separate visual routes?
- A principal component analysis of facial expressions. Vision Research.
- Configural coding of facial expressions: The impact of inversion and photographic negative. Visual Cognition.
- Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience.
- Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance.
- Face aftereffects predict individual differences in face recognition ability. Psychological Science.
- The Cambridge Car Memory Test: A task matched in format to the Cambridge Face Memory Test, with norms, reliability, sex differences, dissociations from face memory, and expertise effects. Behavior Research Methods.
- Examinations of identity invariance in facial expression adaptation. Cognitive, Affective and Behavioural Neuroscience.
- Why are you angry with me? Facial expressions of threat influence perception of gaze direction. Journal of Vision.
- Effective connectivity within the distributed cortical network for face perception. Cerebral Cortex.
- Discovering statistics using IBM SPSS Statistics.
- Variants of independence in the perception of facial identity and expression. Journal of Experimental Psychology: Human Perception and Performance.