MINI REVIEW article

Front. Psychol., 03 July 2020
Sec. Perception Science
This article is part of the Research Topic "Discrimination of Genuine and Posed Facial Expressions of Emotion."

Recognizing Genuine From Posed Facial Expressions: Exploring the Role of Dynamic Information and Face Familiarity

  • 1Division of Neuroscience and Experimental Psychology, University of Manchester, Manchester, United Kingdom
  • 2School of Social Sciences, Humanities and Law, Teesside University, Middlesbrough, United Kingdom

The accurate recognition of emotion is important for interpersonal interaction and when navigating our social world. However, not all facial displays reflect the emotional experience currently being felt by the expresser. Indeed, faces express both genuine and posed displays of emotion. In this article, we summarize the importance of motion for the recognition of face identity before critically outlining the role of dynamic information in determining facial expressions and distinguishing between genuine and posed expressions of emotion. We propose that both dynamic information and face familiarity may modulate our ability to determine whether an expression is genuine or not. Finally, we consider the shared role for dynamic information across different face recognition tasks and the wider impact of face familiarity on determining genuine from posed expressions during real-world interactions.

Introduction

Face perception is a crucial part of social cognition, and on a daily basis, we encounter many faces. Faces convey characteristics of the viewed person, like their age, gender, emotional state, and identity. Face identity recognition is particularly important for social functioning as it enables us to distinguish a familiar person from an unknown individual. Previous research has revealed that factors including facial attractiveness, distinctiveness (Wiese et al., 2014), race (Meissner and Brigham, 2001), and facial motion (Lander et al., 1999) influence how well a face is recognized. Similarly, the ability to accurately determine another person’s emotional state is important for navigating day-to-day social interactions, for example, realizing whether a person is friendly or frightened, angry or sad. Previous research has shown that we use voice prosody (e.g., Wurm et al., 2001), body position (de Gelder, 2006), gait (Montepare et al., 1987), and facial expression (Adolphs, 1999) to determine emotional state.

Displayed facial expressions may reflect a genuinely felt emotion linked to an actual, remembered, or imagined event, for example, fear when scared or sad when remembering the death of a loved one. However, in some circumstances, facial expression may not reflect genuine emotion but instead be posed. Here, there may be no strong emotional experience, like smiling on cue or faking a surprised look. Alternatively, the expression displayed may mask the genuine emotion felt, like smiling when receiving a disappointing present. “Display rules” are rules learnt early in life that help determine the appropriate expression of emotion in different social contexts (Ekman and Friesen, 1969) and cultures (Matsumoto et al., 2009). Emotions may be amplified or de-amplified; they may be masked, neutralized, or simulated. Masking of emotions may be one way to recruit the help of others or otherwise gain a social advantage (Krumhuber and Manstead, 2009).

Research on facial expression processing has predominantly used static facial images taken at the expression “apex.” For example, Ekman and Friesen (1976) created a set of standardized static images of the “basic” facial expressions of happiness, sadness, fear, anger, disgust, surprise, and neutral. However, in the real world, facial expressions are dynamic in nature, rapidly changing over time. Interestingly, it is known that we are highly sensitive to dynamic information available from the face (Edwards, 1998; Dobs et al., 2014). Accordingly, sets of dynamic expressions have been developed (e.g., the Amsterdam Dynamic Facial Expression Set; ADFES; van der Schalk et al., 2011). It is important to consider the way in which expression sets are created. Typically, they are created by telling or showing the “actors” how to display prototypical expressions [based on Facial Action Coding System (FACS) coding; Ekman and Friesen, 1978]. However, some research aims to capture genuine facial expressions that spontaneously occur as part of an emotional experience (see McLellan et al., 2010). Work on expression genuineness necessarily utilizes this method, with “genuine expressions” usually filmed in the lab. We return to the real-world application of such work later in this article.

In this review, our overall aim is to explore the role of dynamic information in determining genuine from posed expressions. We start by outlining work investigating the recognition of face identity, highlighting the potential role for “characteristic motion signatures” (O’Toole et al., 2002). Next, we consider the role of dynamic information when recognizing facial expressions. Characteristic motion signatures may also be associated with emotional expressions and thus play a role in determining expression genuineness. Accordingly, we critically consider the difference between genuine and posed emotional expressions, in terms of the static- and dynamic-based cues available. Lastly, we consider the possible mediating effect of dynamic information and face familiarity when discriminating between genuine and posed expressions.

Movement and the Recognition of Face Identity

Research has established that dynamic information is important when determining face identity (“motion advantage”; see Schiff et al., 1986; Knight and Johnston, 1997; Lander et al., 1999). Specifically, research has found that seeing a face move aids the learning of face identity (Pike et al., 1997; Knappmeyer et al., 2003; Lander and Bruce, 2003; Pilz et al., 2006; Lander and Davies, 2007; Butcher et al., 2011), identification of familiar faces (Knight and Johnston, 1997; Lander et al., 2001), and accurate and faster face matching (Thornton and Kourtzi, 2002). Dynamic facial information seems to be a particularly useful cue to identity recognition when viewing conditions are difficult, for example, when faces are presented in photographic negative (see Knight and Johnston, 1997; in a negative image, the pattern of brightness is reversed) or blurred (Lander et al., 2001). Also, dynamic information is useful when there is perceiver impairment, such as prosopagnosia (see Steede et al., 2007; Longmore and Tree, 2013; Xiao et al., 2014; Bennetts et al., 2015).

O’Toole et al. (2002) proposed several theoretical reasons why seeing a face move may facilitate identity recognition. These theories are not mutually exclusive and the extent to which they each account for the motion advantage may depend on whether the to-be-recognized face is unfamiliar or known. For unfamiliar faces, seeing a face move may help build robust face representations via structure-from-motion processes (“representation enhancement hypothesis”). However, for familiar faces, people may learn characteristic motion patterns associated with their identity, which act as an additional cue to identity (“supplemental information hypothesis”). Finally, social cues available from the moving face may attract attention to the identity-specific areas of the face, facilitating identity processing (“social signals hypothesis”). While both the representation enhancement and supplemental information hypotheses have received empirical support (e.g., Knappmeyer et al., 2003; Butcher et al., 2011), the plausibility of the social signals hypothesis is relatively unknown, as its predictions have received little attention. To summarize, dynamic information available from a moving face may be useful for both building new face representations and accessing established ones.

Movement and the Recognition of Facial Expressions

While the motion advantage in identity recognition appears relatively robust, the effect of dynamic information on facial expression recognition is less consistent. Some research has shown that dynamic facial expressions are recognized more accurately (Cunningham and Wallraven, 2009; Trautmann et al., 2009) and rapidly (Calvo et al., 2016) than static facial expressions (see Krumhuber et al., 2013). However, other studies have found no difference between static and dynamic expression recognition (Kätsyri et al., 2008; Fiorentini and Viviani, 2011) or have only found a dynamic recognition advantage for some expressions (Fujimura and Suzuki, 2010; Recio et al., 2011).

One potential issue when comparing dynamic and static facial expression recognition is that static performance typically approaches ceiling, leaving little “room” to demonstrate any advantage. Interestingly, the usefulness of dynamic information for expression recognition is seen in studies that make recognition more difficult, through the use of point-light stimuli (Matsuzaki and Sato, 2008), subtle expressions (Ambadar et al., 2005), or by imposing time pressures (Zhongqing et al., 2014). Furthermore, Kamachi et al. (2001) found that changing the dynamic parameters of morphed expressions affected how well different expressions were recognized. As with identity recognition, dynamic facial information may support expression recognition in a flexible way, optimizing face perception when the task demands of everyday face-to-face interactions are such that static cues alone are not sufficient (Xiao et al., 2014).

In additional work supporting the distinction between the recognition of moving and static expressions, Humphreys et al. (1993) report the case of an acquired prosopagnosic patient who could make expression judgments from moving (but not static) faces, consistent with the idea of at least partially dissociable static and dynamic expression processing. A number of neuroimaging studies have also investigated neural differences when viewing dynamic and static facial expressions (Kilts et al., 2003; Sato et al., 2004; Trautmann et al., 2009; Foley et al., 2012). Trautmann et al. (2009) found that dynamic faces enhanced emotion-specific activation in a network of regions including the parahippocampal gyrus, amygdala, fusiform gyrus, superior temporal gyrus, inferior frontal gyrus, and occipital and orbitofrontal cortices. Post hoc ratings of the dynamic stimuli revealed better recognizability in comparison to the static stimuli (but see Trautmann-Lengsfeld et al., 2013). To summarize, much behavioral and neural work suggests that dynamic information can be useful in facial expression recognition, particularly when recognition is difficult. However, this advantage is not unequivocally shown in the existing literature.

Movement and the Recognition of Genuine From Posed Expressions

Increasingly, researchers have become interested in the distinction between genuine and posed facial expressions. Initially, research concentrated on static happy expressions (see Frank et al., 1993; Gunnery and Ruben, 2016). Here, genuine smiles (“Duchenne” smiles) are thought to involve crinkling around the eyes (“crow’s feet”) caused by activation of the orbicularis oculi muscles. Posed smiles instead involve just an upturned mouth, created by contraction of the zygomatic major muscle. More recent work has investigated expression genuineness discrimination across a range of emotions.

McLellan et al. (2010) found that perceivers were able to distinguish between static genuine and posed happy, sad, and fearful facial expressions. They also found that participants made valence judgments to words faster after viewing a genuine valence-congruent expression (e.g., a genuine smile before a positive word) compared to a posed expression. Additional support for differences between the perception of genuine and posed expressions comes from neuroimaging work which showed different patterns of neural activation (McLellan et al., 2012). However, findings by Dawel et al. (2015) suggest that the differences between genuine and posed expressions are less apparent than previously proposed. They found that both adults and children could discriminate genuine from posed happy expressions, and adults were able to discriminate sad displays. However, neither group could discriminate between genuine and posed scared facial expressions. We conclude that most research, using static pictures, suggests that people can successfully discriminate between genuine and posed facial expressions in some circumstances – but that this ability may vary by expression and individual.

It is also important to consider the role of dynamic information in determining expression genuineness. Dynamic aspects of an expression may serve as useful cues when distinguishing genuine from posed expressions (Hess and Kleck, 1994; Gunnery and Ruben, 2016). Early research proposed that genuine smiles last between 500 and 4000 ms with posed smiles being either shorter or longer than this (Ekman, 2009). In addition, genuine smiles may have a slower onset speed and longer onset duration (Schmidt et al., 2006) than posed smiles. Recent research has begun to investigate the role of dynamic information in the recognition of expression genuineness across a range of facial expressions.

Namba et al. (2018) asked participants to judge whether viewed facial expressions were being depicted (posed) or experienced (genuine). Expressions (amusement, surprise, disgust, and fear) were shown as dynamic clips or static images. For all emotions, genuine expressions were more likely than posed ones to be judged as experienced. Importantly, participants were better at differentiating between genuine and posed expressions when the stimuli were dynamic rather than static. Similarly, Zloteanu et al. (2018) found that the use of moving stimuli improved the discrimination of surprise authenticity. We note that, as with static images, overall performance on dynamic expression genuineness decisions may depend on the exact task used, the emotions considered, the participants themselves, and so on. However, cues to expression authenticity may be present in the dynamics of the facial movement.

Interdependence Between Face Familiarity and Face Movement in the Recognition of Expression Genuineness

We have already outlined research that suggests dynamic facial information is useful when determining the genuineness of facial expressions of emotion. Here, we further propose that there may be interdependence between face familiarity and face movement when determining expression genuineness.

In terms of face familiarity, it is known from neuroimaging studies that personal familiarity impacts on the response of neural systems involved in expression processing (Gobbini et al., 2004; Leibenluft et al., 2004). There is also some evidence that familiarity plays a role in the recognition of genuine emotional expressions, with performance seen to improve with familiarity (Wild-Wall et al., 2008; Huynh et al., 2010). However, other studies indicate a detrimental effect of familiarity on expression recognition in children (Herba et al., 2008) and some clinical populations (e.g., schizophrenia; Lahera et al., 2013). Thus, there is inconsistency regarding the role of familiarity on expression recognition.

Interestingly, research investigating the recognition of expression genuineness typically uses unfamiliar faces. This may be reflective of some real-life tasks, for example, in a criminal situation where the task is to determine whether an unfamiliar suspect is displaying a genuine expression or covering up a lie (Porter and ten Brinke, 2010). However, often, our interpretation of expression genuineness involves familiar people – for example, is our child genuinely happy or sarcastically smiling? Further research is needed to determine how face familiarity influences our ability to determine expression genuineness. We propose that for familiar faces, there may be additional cues that help us determine whether an expression is genuine or not, for example, a particular lop-sided smile associated with the genuine smile of a friend. Such idiosyncratic static-based cues may aid the distinction between genuine and posed smiles for this person. Thus, it is possible that face familiarity plays a mediating role in the recognition of genuine versus posed expressions, with better discrimination for familiar compared with unfamiliar faces.

It is also important to consider the possible interdependence between familiarity and dynamic information. When a face is familiar, characteristic motion patterns may act as an additional cue to identity. Indeed, the size of the motion advantage for face recognition is positively associated with face familiarity (Butcher and Lander, 2016). Such characteristic motion patterns may be linked to expressional movements. Thus, face familiarity may play a more prominent role when recognizing genuine from posed expressions using dynamic stimuli. For example, a friend may have a characteristic smile (present in the static image) but they may also have a characteristic way of smiling (dynamic characteristics). Here, cues to expression genuineness may be present in both the static- and dynamic-based parameters of a familiar person’s expression. To summarize, further work is needed to determine whether expression genuineness decisions are better for familiar than unfamiliar faces and whether this advantage is exaggerated for dynamic compared with static clips. In addition, we need to consider the interdependence between face familiarity, dynamic information, and expression genuineness.

Concluding Comments and Future Directions

The literature reviewed demonstrates that dynamic information is useful for face identification (Lander et al., 1999), expression recognition (Krumhuber et al., 2013), and for expression genuineness judgments (Namba et al., 2018). Further, we propose a possible facilitative effect of face familiarity and face movement when determining expression genuineness. It is interesting to consider what other issues remain in this research area.

First, we propose a shared role for dynamic information across different face tasks. Much facial motion contains both identity-specific and expression information which, on an everyday basis, are processed simultaneously. Work is needed to determine whether neural models of face processing can account for the shared importance of dynamic information across different face processing tasks. According to Haxby’s neural account (Haxby et al., 2000; Haxby and Gobbini, 2011), there is one cortical pathway that processes invariant aspects of faces (identity and gender; Fusiform Face Area) and another that processes changeable aspects of faces (expression and eye gaze; posterior superior temporal sulcus face area; pSTS-FA). Pitcher et al. (2014) suggest that the dynamic and static components of a face are processed via dissociable cortical pathways. Alternatively, Bernstein et al. (2018) suggest an integrated neural model of face processing, with dorsal face areas (pSTS-FA) sensitive to dynamic and changeable facial aspects, whereas ventral areas (Occipital Face Area and Fusiform Face Area) extract form information from both invariant and changeable facial aspects. Such neural accounts need to be integrated with behavioral work to better understand the shared role of dynamic information for the different face tasks we encounter in the real world.

Second, to fully understand the task of recognizing expression genuineness, it is necessary to know what information is required for this task. Low and high spatial frequencies play different roles in the perception of facial expressions (Vuilleumier et al., 2003). Low spatial frequencies carry global/configural information whereas high spatial frequencies convey localized/fine-grain information. Low and high spatial frequencies may also play different roles in the classification of expression genuineness (Laeng et al., 2010; Kihara and Takeda, 2019). Additional work is needed to isolate which spatial frequency aspects of faces are diagnostic of expression genuineness when shown as dynamic clips.

Finally, it is important to consider how the expressions used in recognition experiments are collected and selected. Genuine expressions elicited in the lab using emotion-induction methods may lack the spontaneity of genuine expressions in the real world (Smoski and Bachorowski, 2003). The experimenter’s selection of genuine expressions may also rely on the same criteria used to select posed expressions. We suggest that real-world expressions may be more idiosyncratic and individualistic than those collected in the lab, modulated by familiarity and context. Investigation of these issues is important so that we can further consider expression genuineness and the impact of familiarity and dynamic information.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Funding

This work was funded by the University of Manchester Open Access fund.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Adolphs, R. (1999). Social cognition and the human brain. Trends Cogn. Sci. 3, 469–479.

Ambadar, Z., Schooler, J. W., and Cohn, J. F. (2005). Deciphering the enigmatic face—the importance of facial dynamics in interpreting subtle facial expressions. Psychol. Sci. 16, 403–410. doi: 10.1111/j.0956-7976.2005.01548.x

Bennetts, R. J., Butcher, N., Lander, K., and Bate, S. (2015). Movement cues aid face recognition in developmental prosopagnosia. Neuropsychology 29, 855–860. doi: 10.1037/neu0000187

Bernstein, M., Erez, Y., Blank, I., and Yovel, G. (2018). An integrated neural framework for dynamic and static face processing. Sci. Rep. 8:7036.

Butcher, N., and Lander, K. (2016). Exploring the motion advantage: evaluating the contribution of familiarity and differences in facial motion. Q. J. Exp. Psychol. 70, 919–929. doi: 10.1080/17470218.2016.1138974

Butcher, N., Lander, K., Fang, H., and Costen, N. (2011). The effect of motion at encoding and retrieval for same and other race face recognition. Br. J. Psychol. 102, 931–942. doi: 10.1111/j.2044-8295.2011.02060.x

Calvo, M. G., Avero, P., Fernández-Martín, A., and Recio, G. (2016). Recognition thresholds for static and dynamic emotional faces. Emotion 16, 1186–1200. doi: 10.1037/emo0000192

Cunningham, D. W., and Wallraven, C. (2009). Dynamic information for the recognition of conversational expressions. J. Vis. 9, 7.1–7.17.

Dawel, A., Palermo, R., O’Kearney, R., and McKone, E. (2015). Children can discriminate the authenticity of happy but not sad or fearful facial expressions, and use an immature intensity-only strategy. Front. Psychol. 6:462. doi: 10.3389/fpsyg.2015.00462

de Gelder, B. (2006). Toward a biological theory of emotional body language. Biol. Theory 1, 130–132. doi: 10.1162/biot.2006.1.2.130

Dobs, K., Bülthoff, I., Breidt, M., Vuong, Q. C., Curio, C., and Schultz, J. (2014). Quantifying human sensitivity to spatio-temporal information in dynamic faces. Vis. Res. 100, 78–87. doi: 10.1016/j.visres.2014.04.009

Edwards, K. (1998). The face of time: temporal cues in facial expression of emotion. Psychol. Sci. 9, 270–276. doi: 10.1111/1467-9280.00054

Ekman, P. (2009). “Lie catching and microexpressions,” in The Philosophy of Deception, ed. C. Martin (Oxford: Oxford University Press), 118–133.

Ekman, P., and Friesen, W. V. (1969). The repertoire of nonverbal behavior: categories, origins, usage, and coding. Semiotica 1, 49–98.

Ekman, P., and Friesen, W. V. (1976). Pictures of Facial Affect. Palo Alto, CA: Consulting Psychologists Press.

Ekman, P., and Friesen, W. V. (1978). Facial Action Coding System: A Technique for the Measurement of Facial Movement. Palo Alto, CA: Consulting Psychologists Press.

Fiorentini, C., and Viviani, P. (2011). Is there a dynamic advantage for facial expressions? J. Vis. 11:17. doi: 10.1167/11.3.17

Foley, E., Rippon, G., Thai, N. J., Longe, O., and Senior, C. (2012). Dynamic facial expressions evoke distinct activation in the face perception network: a connectivity analysis study. J. Cogn. Neurosci. 24, 507–520. doi: 10.1162/jocn_a_00120

Frank, M. G., Ekman, P., and Friesen, W. V. (1993). Behavioral markers and recognizability of the smile of enjoyment. J. Pers. Soc. Psychol. 64, 83–93. doi: 10.1037/0022-3514.64.1.83

Fujimura, T., and Suzuki, N. (2010). Recognition of dynamic facial expressions in peripheral and central vision. Jpn. J. Psychol. 81, 348–355. doi: 10.4992/jjpsy.81.348

Gobbini, M. I., Leibenluft, E., Santiago, N., and Haxby, J. V. (2004). Social and emotional attachment in the neural representation of faces. Neuroimage 22, 1628–1635. doi: 10.1016/j.neuroimage.2004.03.049

Gunnery, S. D., and Ruben, M. A. (2016). Perceptions of Duchenne and non-Duchenne smiles: a meta-analysis. Cogn. Emot. 30, 501–515. doi: 10.1080/02699931.2015.1018817

Haxby, J. V., and Gobbini, M. I. (2011). Distributed neural systems for face perception. Oxford Handb. Face Percept. 6, 93–110.

Haxby, J. V., Hoffman, E. A., and Gobbini, M. I. (2000). The distributed human neural system for face perception. Trends Cogn. Sci. 4, 223–233. doi: 10.1016/s1364-6613(00)01482-0

Herba, C. M., Benson, P., Landau, S., Russell, T., Goodwin, C., Lemche, E., et al. (2008). Impact of familiarity upon children’s developing facial expression recognition. J. Child Psychol. Psychiatry 49, 201–210. doi: 10.1111/j.1469-7610.2007.01835.x

Hess, U., and Kleck, R. E. (1994). The cues decoders use in attempting to differentiate emotion-elicited and posed facial expressions. Eur. J. Soc. Psychol. 24, 367–381. doi: 10.1002/ejsp.2420240306

Humphreys, G. W., Donnelly, N., and Riddoch, M. J. (1993). Expression is computed separately from facial identity, and it is computed separately for moving and static faces: neuropsychological evidence. Neuropsychologia 31, 173–181. doi: 10.1016/0028-3932(93)90045-2

Huynh, C. M., Vicente, G. I., and Peissig, J. J. (2010). The effects of familiarity on genuine emotion recognition. J. Vis. 10:628. doi: 10.1167/10.7.628

Kamachi, M., Bruce, V., Mukaida, S., Gyoba, J., Yoshikawa, S., and Akamatsu, S. (2001). Dynamic properties influence the perception of facial expressions. Perception 30, 875–887. doi: 10.1068/p3131

Kätsyri, J., Saalasti, S., Tiippana, K., von Wendt, L., and Sams, M. (2008). Impaired recognition of facial emotions from low-spatial frequencies in Asperger syndrome. Neuropsychologia 46, 1888–1897. doi: 10.1016/j.neuropsychologia.2008.01.005

Kihara, K., and Takeda, Y. (2019). The role of low-spatial frequency components in the processing of deceptive faces: a study using artificial face models. Front. Psychol. 10:1468. doi: 10.3389/fpsyg.2019.01468

Kilts, C. D., Egan, G., Gideon, D. A., Ely, T. D., and Hoffman, J. M. (2003). Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. NeuroImage 18, 156–168. doi: 10.1006/nimg.2002.1323

Knappmeyer, B., Thornton, I., and Bülthoff, H. (2003). The use of facial motion and facial form during the processing of identity. Vis. Res. 43, 1921–1936. doi: 10.1016/s0042-6989(03)00236-0

Knight, B., and Johnston, A. (1997). The role of movement in face recognition. Vis. Cogn. 4, 265–273. doi: 10.1080/713756764

Krumhuber, E. G., Kappas, A., and Manstead, A. S. R. (2013). Effects of dynamic aspects of facial expressions: a review. Emot. Rev. 5, 41–46. doi: 10.1177/1754073912451349

Krumhuber, E. G., and Manstead, A. S. R. (2009). Can Duchenne smiles be feigned? New evidence on felt and false smiles. Emotion 9, 807–820. doi: 10.1037/a0017844

Laeng, B., Profeti, I., Saether, L., Adolfsdottir, S., Lundervold, A. J., Vangberg, T., et al. (2010). Invisible expressions evoke core impressions. Emotion 10, 573–586. doi: 10.1037/a0018689

Lahera, G., Herrera, S., Fernández, C., Bardón, M., de los Ángeles, V., and Fernández-Liria, A. (2013). Familiarity and face emotion recognition in patients with schizophrenia. Compr. Psychiatry 55, 199–205. doi: 10.1016/j.comppsych.2013.06.006

Lander, K., and Bruce, V. (2003). The role of motion in learning new faces. Vis. Cogn. 10, 897–912. doi: 10.1080/13506280344000149

Lander, K., Bruce, V., and Hill, H. (2001). Evaluating the effectiveness of pixelation and blurring on masking the identity of familiar faces. Appl. Cogn. Psychol. 15, 101–116. doi: 10.1002/1099-0720(200101/02)15:1<101::aid-acp697>3.0.co;2-7

Lander, K., Christie, F., and Bruce, V. (1999). The role of movement in the recognition of famous faces. Mem. Cogn. 27, 974–985. doi: 10.3758/bf03201228

Lander, K., and Davies, R. (2007). Exploring the role of characteristic motion when learning new faces. Q. J. Exp. Psychol. 60, 519–526. doi: 10.1080/17470210601117559

Leibenluft, E., Gobbini, M. I., Harrison, T., and Haxby, J. V. (2004). Mothers’ neural activation in response to pictures of their, and other, children. Biol. Psychiatry 56, 225–232. doi: 10.1016/j.biopsych.2004.05.017

Longmore, C., and Tree, J. (2013). Motion as a cue to face recognition: evidence from congenital prosopagnosia. Neuropsychologia 51, 864–875. doi: 10.1016/j.neuropsychologia.2013.01.022

Matsumoto, D., Willingham, B., and Olide, A. (2009). Sequential dynamics of culturally moderated facial expressions of emotion. Psychol. Sci. 20, 1269–1274. doi: 10.1111/j.1467-9280.2009.02438.x

Matsuzaki, N., and Sato, T. (2008). The perception of facial expression from two-frame apparent motion. Perception 37:1560. doi: 10.1068/p5769

McLellan, T., Johnston, L., Dalrymple-Alford, J., and Porter, R. (2010). Sensitivity to genuine versus posed emotion specified in facial displays. Cogn. Emot. 24, 1277–1292. doi: 10.1080/02699930903306181

McLellan, T. L., Wilcke, J. C., Johnston, L., Watts, R., and Miles, L. K. (2012). Sensitivity to posed and genuine displays of happiness and sadness: a fMRI study. Neurosci. Lett. 531, 149–154. doi: 10.1016/j.neulet.2012.10.039

Meissner, C. A., and Brigham, J. C. (2001). Thirty years of investigating the own-race bias in memory for faces: a meta-analytic review. Psychol. Public Policy Law 7, 3–35. doi: 10.1037//1076-8971.7.1.3

Montepare, J., Goldstein, S., and Clausen, A. (1987). The identification of emotions from gait information. J. Nonverb. Behav. 11, 33–42. doi: 10.1007/bf00999605

Namba, S., Kabir, R. S., Miyatani, M., and Nakao, T. (2018). Dynamic displays enhance the ability to discriminate genuine and posed facial expressions of emotion. Front. Psychol. 9:672. doi: 10.3389/fpsyg.2018.00672

O’Toole, A. J., Roark, D. A., and Abdi, H. (2002). Recognizing moving faces: a psychological and neural synthesis. Trends Cogn. Sci. 6, 261–266. doi: 10.1016/s1364-6613(02)01908-3

Pike, G. E., Kemp, R. I., Towell, N. A., and Phillips, K. C. (1997). Recognizing moving faces: the relative contribution of motion and perspective view information. Vis. Cogn. 4, 409–437.

Pilz, K. S., Thornton, I. M., and Bülthoff, H. H. (2006). A search advantage for faces learned in motion. Exp. Brain Res. 171, 436–447. doi: 10.1007/s00221-005-0283-8

Pitcher, D., Duchaine, B., and Walsh, V. (2014). Combined TMS and fMRI reveal dissociable cortical pathways for dynamic and static face perception. Curr. Biol. 24, 2066–2070. doi: 10.1016/j.cub.2014.07.060

Porter, S., and ten Brinke, L. (2010). The truth about lies: what works in detecting high-stakes deception? Legal Criminol. Psychol. 15, 57–75. doi: 10.1348/135532509x433151

Recio, G., Sommer, W., and Schacht, A. (2011). Electrophysiological correlates of perceiving and evaluating static and dynamic facial emotional expressions. Brain Res. 1376, 66–75. doi: 10.1016/j.brainres.2010.12.041

Sato, W., Kochiyama, T., Yoshikawa, S., Naito, E., and Matsumura, M. (2004). Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study. Cogn. Brain Res. 20, 81–91. doi: 10.1016/j.cogbrainres.2004.01.008

Schiff, W., Banka, L., and Galdi, G. D. (1986). Recognizing people seen in events via dynamic “mug shots”. Am. J. Psychol. 99, 219–231.

Schmidt, K. L., Ambadar, Z., Cohn, J. F., and Reed, L. I. (2006). Movement differences between deliberate and spontaneous facial expressions: zygomaticus major action in smiling. J. Nonverb. Behav. 30, 37–52. doi: 10.1007/s10919-005-0003-x

Smoski, M. J., and Bachorowski, J. A. (2003). Antiphonal laughter between friends and strangers. Cogn. Emot. 17, 327–340. doi: 10.1080/02699930302296

Steede, L., Tree, J., and Hole, G. (2007). Dissociating mechanisms involved in accessing identity by dynamic and static cues. Vis. Cogn. 15, 116–119.

Thornton, I. M., and Kourtzi, Z. (2002). A matching advantage for dynamic human faces. Perception 31, 113–132. doi: 10.1068/p3300

Trautmann, S. A., Fehr, T., and Herrmann, M. (2009). Emotions in motion: dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Res. 1284, 100–115. doi: 10.1016/j.brainres.2009.05.075

Trautmann-Lengsfeld, S. A., Dominguez-Vorras, J., Escera, C., Herrmann, M., and Fehr, T. (2013). The perception of dynamic and static facial expressions of happiness and disgust investigated by ERPs and fMRI constrained source analysis. PLoS One 8:e66997. doi: 10.1371/journal.pone.0066997

van der Schalk, J., Hawk, S. T., Fischer, A. H., and Doosje, B. (2011). Moving faces, looking places: validation of the Amsterdam dynamic facial expression set (ADFES). Emotion 11, 907–920. doi: 10.1037/a0023853

Vuilleumier, P., Armony, J., Driver, J., and Dolan, R. J. (2003). Distinct spatial frequency sensitivities for processing faces and emotional expressions. Nat. Neurosci. 6, 624–631. doi: 10.1038/nn1057

Wiese, H., Altmann, C. S., and Schweinberger, S. R. (2014). Effects of attractiveness on face memory separated from distinctiveness: evidence from event-related brain potentials. Neuropsychologia 56, 26–36. doi: 10.1016/j.neuropsychologia.2013.12.023

Wild-Wall, N., Dimigen, O., and Sommer, W. (2008). Interaction of facial expressions and familiarity: ERP evidence. Biol. Psychol. 77, 138–149. doi: 10.1016/j.biopsycho.2007.10.001

Wurm, L. H., Vakoch, D. A., Strasser, M. R., Calin-Jageman, R., and Ross, S. E. (2001). Speech perception and vocal expression of emotion. Cogn. Emot. 15, 831–852. doi: 10.1080/02699930143000086

Xiao, N. G., Perrotta, S., Quinn, P. C., Wang, Z., Sun, Y. H. P., and Lee, K. (2014). On the facilitative effects of face motion on face recognition and its development. Front. Psychol. 5:633. doi: 10.3389/fpsyg.2014.00633

Zhongqing, J., Wenhui, L., Recio, G., Ying, L., Wenbo, L., Doufei, Z., et al. (2014). Pressure inhibits dynamic advantage in the classification of facial expressions of emotion. PLoS One 9:e100162. doi: 10.1371/journal.pone.0100162

Zloteanu, M., Krumhuber, E. G., and Richardson, D. C. (2018). Detecting genuine and deliberate displays of surprise in static and dynamic faces. Front. Psychol. 9:1184. doi: 10.3389/fpsyg.2018.01184

Keywords: expression recognition, genuine and posed, dynamic information, face familiarity, face recognition

Citation: Lander K and Butcher NL (2020) Recognizing Genuine From Posed Facial Expressions: Exploring the Role of Dynamic Information and Face Familiarity. Front. Psychol. 11:1378. doi: 10.3389/fpsyg.2020.01378

Received: 05 February 2020; Accepted: 22 May 2020;
Published: 03 July 2020.

Edited by: Shuo Wang, West Virginia University, United States

Reviewed by: Pedro Guerra, University of Granada, Spain; Harold Hill, University of Wollongong, Australia

Copyright © 2020 Lander and Butcher. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Karen Lander, karen.lander@manchester.ac.uk