Intensity dependence in high-level facial expression adaptation aftereffect
Perception of a facial expression can be altered or biased by prolonged viewing of other facial expressions, a phenomenon known as the facial expression adaptation aftereffect (FEAA). Recent studies using antiexpressions have demonstrated a monotonic relation between the magnitude of the FEAA and adaptor extremity, suggesting that facial expressions are opponent coded and represented continuously from one expression to its antiexpression. However, it is unclear whether the opponent-coding scheme can account for the FEAA between two facial expressions. In the current study, we demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as a function of the intensity of the adapting facial expressions, consistent with predictions based on the opponent-coding model. Further, the monotonic increase in the FEAA occurred even when the intensity of an adapting face was too weak for its expression to be recognized. Together, these results suggest that multiple facial expressions are encoded and represented by the balanced activity of neural populations tuned to different facial expressions.
Keywords: Facial expressions · Adaptation · Intensity dependence · Opponent coding
Prolonged viewing of a visual stimulus induces a reduction in sensitivity to that stimulus, which results in a change or bias in the perceptual experience of a subsequently presented stimulus. For example, adaptation to a reddish light makes a subsequently viewed neutral (achromatic) light appear greenish. In addition to low-level visual features such as color, orientation, and direction of motion, visual adaptation aftereffects can be observed with high-level visual stimuli, such as faces. Indeed, adaptation aftereffects have been reported for various facial properties, including identity (Leopold, O’Toole, Vetter, & Blanz, 2001), gender (Webster, Kaping, Mizokami, & Duhamel, 2004), ethnicity (Webster et al., 2004), and facial expressions (Benton et al., 2007; Butler, Oruc, Fox, & Barton, 2008; C. J. Fox & Barton, 2007; Pell & Richards, 2013; Skinner & Benton, 2010; Webster et al., 2004). The face adaptation aftereffect (FAA) is robustly observed even when the adapting and test faces differ in size (Yamashita, Hardy, De Valois, & Webster, 2005) or are presented at different locations (Kovács, Zimmer, Harza, Antal, & Vidnyánszky, 2005). The FAA thus seems to occur at a later stage of visual processing and is referred to as high-level adaptation (Webster & MacLeod, 2011).
The FAA is particularly interesting because it could provide insight into how faces are encoded. Using antifaces in a visual adaptation paradigm, researchers have demonstrated that the identity of a face is encoded by a norm-based, opponent-coding mechanism (Jeffery et al., 2010; Jeffery et al., 2011; McKone, Jeffery, Boeing, Clifford, & Rhodes, 2014; Rhodes & Jeffery, 2006; Susilo, McKone, & Edwards, 2010). The two-pool norm-based opponent-coding model posits that the responses of two pools of neurons tuned to opposite extremes of a given face dimension adaptively determine the norm; the FAA occurs because the position of the norm shifts after adaptation to a particular face. On the other hand, a recent computational simulation revealed that the FAA from antifaces can also be qualitatively and quantitatively predicted by an exemplar-based model (Ross, Deroche, & Palmeri, 2014). The exemplar-based model holds that faces are encoded according to their locations relative to exemplars of previously experienced faces, rather than relative to the norm (Lewis, 2004; Valentine, 1991). Unlike the two-pool opponent-coding model, which predicts that the FAA increases monotonically as the intensity (extremity) of an adapting face increases (McKone et al., 2014; Robbins, McKone, & Edwards, 2007), the exemplar-based model predicts a nonmonotonic change in the magnitude of the FAA as a function of adaptor extremity.
The FEAA has been observed between two facial expressions (Webster et al., 2004; Yang, Hong, & Blake, 2010). For example, adaptation to one (e.g., happy) of a pair of facial expressions (e.g., a happy–angry pair) leads to a shift in the perception of an average of the two expressions toward the nonadapted facial expression (e.g., angry). Previous research with two facial expressions, however, has not demonstrated a monotonic increase in the magnitude of the FEAA with increasing intensity of an adapting face. Adaptation to a facial expression selectively affects sensitivity to the adapted emotional expression but has only a marginal, if any, influence on the processing of other expressions (Hsu & Young, 2004; Juricevic & Webster, 2012). The specificity of the FEAA suggests that different facial expressions may be processed by distinct neural populations (Hsu & Young, 2004). Thus, we hypothesize that the FEAA between two facial expressions can be accounted for by adaptive changes in the responses of two pools of neural populations, each tuned to one of the two expressions. This is an extension of the opponent-coding model with two important changes. First, the two pools of neural populations are tuned to two different facial expressions instead of a facial expression and its antiexpression. Second, we did not specifically assume a norm facial expression, because the average of two facial expressions is not necessarily the same as the norm (see an example in Fig. 1b). If the FEAA between two expressions is determined by balanced neural activity between two distinct neural populations, the magnitude of the FEAA between two facial expressions would also increase monotonically as a function of the intensity of the adapting facial expression, as predicted by the opponent-coding model.
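The monotonic prediction of this two-pool account can be illustrated with a toy simulation. This is our illustration only, not the authors' model or a fitted model from the literature: the tuning slope S, adaptation strength K, and sigmoid tuning shape are all assumed for the sketch.

```python
import math

# Toy two-pool opponent code for a happy-angry morph axis t in [0, 1]
# (0 = fully angry, 1 = fully happy). S (tuning slope) and K (maximum
# proportional gain loss from adaptation) are assumed illustrative values.
S = 0.15
K = 0.5

def r_happy(t):
    """Response of the happy-tuned pool (sigmoid tuning along the morph axis)."""
    return 1.0 / (1.0 + math.exp(-(t - 0.5) / S))

def r_angry(t):
    """Response of the angry-tuned pool (mirror-image tuning)."""
    return 1.0 / (1.0 + math.exp((t - 0.5) / S))

def pse_after_happy_adaptation(a):
    """Balance point after adapting to a happy face of intensity a.
    Adaptation scales the happy pool's gain by (1 - K * r_happy(a));
    solving g * r_happy(t) = r_angry(t) for t gives a closed form."""
    g = 1.0 - K * r_happy(a)
    return 0.5 + S * math.log(1.0 / g)

# The balance point shifts toward "happy" (a mid-morph face now looks
# angrier), and the shift grows monotonically with adaptor intensity.
shifts = [pse_after_happy_adaptation(a) - 0.5 for a in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

Because the adapted pool's gain loss grows with its response to the adaptor, the balance-point shift is strictly increasing in adaptor intensity, which is the signature tested in the experiments below.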
In two experiments, we assessed the intensity dependence of the FEAA when the adapting and test faces were presented at the same location (Experiment 1) and when the adapting and test faces were presented at different locations (Experiment 2). One of the distinctive characteristics of the high-level adaptation aftereffect, including the FEAA, is that it does not require retinotopic locations of the adapting and test stimuli to be the same (Kovács et al., 2005; Yamashita et al., 2005). Thus, if the intensity dependence truly characterizes the FEAA, it should be observed even when the adapting and test faces appear at different locations (i.e., Experiment 2).
Antiexpressions do not correspond to obvious emotional labels (Skinner & Benton, 2010, 2012b). Thus, the FEAA from antiexpressions suggests that the FEAA may occur due to purely perceptual processes and does not require conscious recognition of expressions. Along this line, the FEAA is observed even when adapting facial expressions are rendered invisible by continuous flash suppression (Adams, Gray, Garner, & Graf, 2010; Yang et al., 2010). Thus, subtle changes in facial features that are too small to be recognized as a specific facial expression may be potent enough to induce the FEAA. To test this hypothesis, we assessed the minimum intensity of facial expressions required for the recognition of happy and angry faces and compared the minimum intensity for recognition of facial expressions and the lowest intensity that could elicit the FEAA.
Undergraduate students participated in the study in exchange for course credit. A total of 83 students participated in the two experiments (Experiment 1: same-location adaptation; Experiment 2: different-location adaptation). Each experiment included two conditions (angry adaptation and happy adaptation), each with six blocks (five intensity blocks and one baseline, no-adaptation block). Each participant completed either the angry or the happy adaptation condition. Participants who did not complete all blocks were excluded from all analyses (15 participants for Experiment 1 and eight participants for Experiment 2).1 As a result, data from 18 participants (14 females) for the happy adaptation condition and 18 participants (15 females) for the angry adaptation condition were analyzed in Experiment 1. In Experiment 2, data from 12 participants (five females) for the happy adaptation condition and 12 participants (eight females) for the angry adaptation condition were analyzed. All participants signed the informed consent form approved by the Florida Atlantic University Institutional Review Board before participating.
Apparatus and stimuli
Stimulus presentation on a Sony CPD-G520 21-in. CRT monitor (100-Hz frame rate) and the collection of behavioral responses were controlled by the Psychophysics Toolbox (Brainard, 1997; Pelli, 1997). Stimuli were presented to participants positioned 90 cm from the CRT monitor, whose luminance had been linearized from black (0.5 cd/m²) to white (70 cd/m²).
Tasks and procedure
After the completion of all six blocks, the minimum intensity of facial expression required for the recognition of an emotion was examined. On each trial, a morphed face (happy and neutral, or angry and neutral) was presented. Participants adjusted the proportion of happy (angry) relative to neutral until the face was recognizable as happy (angry). Two buttons were used to increase and decrease the proportion of the emotional face, and participants ended a trial by pressing a third button when they found a morphed face that they barely recognized as happy or angry. The initial expression intensity of each face was randomly determined. Each of four identities was presented five times in random order. Each participant completed either the angry or the happy version, matching the adaptation condition they had completed.
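The adjustment logic can be sketched as follows. This is a minimal Python illustration of the procedure described above, not the study's code (which used the MATLAB Psychophysics Toolbox); the 5% step size and the key labels are our assumptions.

```python
# Method-of-adjustment sketch: two keys step the morph intensity up or down,
# and an "accept" key records the current level as the recognition threshold.

def run_adjustment(key_presses, start_intensity, step=5):
    """Simulate one adjustment trial. key_presses is a sequence of
    'up', 'down', or 'accept'; intensity is clamped to 0-100%."""
    intensity = start_intensity
    for key in key_presses:
        if key == "up":
            intensity = min(100, intensity + step)
        elif key == "down":
            intensity = max(0, intensity - step)
        elif key == "accept":
            return intensity          # threshold for this trial
    return intensity

# E.g., starting from a random 60% and stepping down until the expression
# is barely recognizable:
threshold = run_adjustment(["down", "down", "down", "accept"], 60)
```

Averaging such thresholds over the five repetitions of each of the four identities yields the per-participant minimum recognizable intensity reported in the Results.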
Experiment 1: Same-location adaptation
The PSEs for the six adaptation blocks (a blank and five intensity levels) were subjected to a repeated-measures analysis of variance (ANOVA), separately for the angry and the happy adaptation. For both emotions, the effect of the intensity of adapting facial expressions was significant: for happy, F(5, 85) = 8.38, p < .001, ηp² = .33; for angry, F(5, 85) = 23.12, p < .001, ηp² = .576. For the happy adaptation, follow-up tests revealed that the PSE of the blank adaptation was significantly different from the PSEs when the adapting stimuli were 50%, F(1, 17) = 5.18, p = .036, ηp² = .234; 70%, F(1, 17) = 19.65, p < .001, ηp² = .536; and 90%, F(1, 17) = 31.35, p < .001, ηp² = .648, happy faces. For the angry adaptation, follow-up tests revealed that all PSEs were significantly different from the PSE of the blank adaptation (the smallest F value was 8.19, p = .011, ηp² = .325, for the 10% angry face).
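For reference, a PSE (point of subjective equality) can be estimated from 2-AFC responses by fitting a psychometric function to the proportion of "happy" judgments at each morph level and reading off its 50% point. This excerpt does not describe the authors' fitting procedure, so the following is a generic least-squares sketch over a coarse parameter grid, with hypothetical response data.

```python
import math

def fit_pse(levels, p_happy):
    """Fit a logistic psychometric function by grid-search least squares
    and return (pse, slope). levels are morph percentages (% happy)."""
    best = (None, None, float("inf"))
    for pse_cand in range(0, 101):            # candidate PSEs: 0-100%
        for s10 in range(1, 41):              # candidate slopes: 0.1-4.0
            slope = s10 / 10.0
            err = sum((1.0 / (1.0 + math.exp(-slope * (x - pse_cand))) - p) ** 2
                      for x, p in zip(levels, p_happy))
            if err < best[2]:
                best = (pse_cand, slope, err)
    return best[0], best[1]

# Hypothetical data: morph level (% happy) vs. proportion judged "happy".
levels = [10, 30, 50, 70, 90]
p_happy = [0.02, 0.10, 0.45, 0.90, 0.98]
pse, slope = fit_pse(levels, p_happy)
```

A shift of the fitted PSE away from the midpoint after adaptation, relative to the blank block, is the aftereffect measure analyzed above.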
To test whether the shift in the PSEs increased linearly as a function of the intensity of adapting facial expressions, we calculated linear trend L scores (Rosenthal, Rosnow, & Rubin, 2000) for each participant by multiplying the PSEs for the 10%, 30%, 50%, 70%, and 90% blocks by contrast weights of -3, -1, 0, +1, and +3, respectively, and summing the products. A positive L score indicates a linear increase in the PSEs, and a negative L score indicates a linear decrease. The L scores for both the angry adaptation condition (.453) and the happy adaptation condition (-.396) were significantly different from zero, t(17) = 6.32, p < .001, and t(17) = 7.00, p < .001, respectively. These results indicate that the adaptation aftereffect became stronger as a function of the intensity of adapting facial expressions, suggesting that adaptive changes in the responses of two pools of neural populations may be responsible for the FEAA between two facial expressions.
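The L-score computation is a simple weighted sum, which can be made concrete as follows (the PSE values for the example participant are invented for illustration; only the contrast weights come from the text):

```python
# Linear-trend L score in the style of Rosenthal, Rosnow, & Rubin (2000):
# weight each intensity block's PSE and sum the products.
WEIGHTS = [-3, -1, 0, +1, +3]   # weights for the 10%, 30%, 50%, 70%, 90% blocks

def l_score(pses):
    """L score for one participant's five PSEs (in % units)."""
    return sum(w * p for w, p in zip(WEIGHTS, pses))

# Hypothetical participant whose PSEs rise with adaptor intensity:
pses = [48.0, 50.0, 52.0, 54.0, 57.0]
score = l_score(pses)   # positive -> PSEs increase across intensity levels
```

A one-sample t test of the participants' L scores against zero then tests whether the linear trend holds at the group level, as reported above.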
Experiment 2: Different-location adaptation
Recent studies demonstrated that adaptation to a curved line presented at the location of a mouth could affect the perception of a subsequently presented facial expression (Dickinson & Badcock, 2013; Xu, Dayan, Lipkin, & Qian, 2008), suggesting that adaptation aftereffects can propagate along the visual processing hierarchy. Thus, the intensity dependence of the FEAA reported in Experiment 1 could have resulted from local adaptation, because the adapting and test faces were presented at the same retinotopic location. To rule out this local-adaptation account, we assessed the adaptation aftereffect while presenting the adapting and test faces at different locations (see Fig. 3b). If the intensity dependence truly characterizes the FEAA, it should not be constrained by the retinotopic location of low-level features (e.g., the curved line of a mouth). That is, the intensity dependence should be observed even when the adapting and test faces are presented at different retinotopic locations.
Minimum intensity of facial expression and FEAA
To examine whether the FEAA is based on a perceptual process that does not require recognition or labeling of facial expressions, we examined the minimum intensity of a facial expression required for its recognition. The mean minimum intensity required for an expression to be recognized was 39.76% (SD = 15.5) for happy and 58.3% (SD = 18.12) for angry. As presented earlier, for the happy adaptation, a significant shift in the PSE was observed when the intensity of the adapting faces was at least 50%, which exceeded the minimum intensity required for a face to be recognized as happy. For the angry adaptation, however, the FEAA was observed even at 10% intensity, which was below the minimum intensity required for a face to be perceived as angry. These results indicate that, at least for angry faces, recognition of an emotion may not be critical for the FEAA to occur.
Consistent with the prediction based on the opponent-coding model, the current findings demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as the intensity of adapting facial expressions increased. This result, thus, extends the scope of the opponent-coding model to the FEAA between two facial expressions. We also demonstrated that biases in the perception of a facial expression could be induced by adaptation to a face whose intensity of expression was too weak to be recognized as angry, suggesting that subtle changes in facial features are potent enough to cause the FEAA.
A monotonic increase in the FEAA as a function of the extremity of an adapting antiexpression suggests that a single facial expression may be represented by balanced activity between two pools of neural populations tuned to the opposite ends of a single facial expression dimension (Burton et al., 2015; Burton et al., 2013; Rhodes et al., 2017; Skinner & Benton, 2010, 2012a). Antiexpressions are created by morphing a facial expression (e.g., a happy face) along a linear trajectory through the overall norm face to a point opposite the original expression (see Fig. 1a). Although antiexpressions generally do not represent any particular emotion (Sato & Yoshikawa, 2009), the two-pool opponent-coding model provides a useful scheme for understanding how multiple facial expressions may be encoded and represented. Consistent with the prediction based on the opponent-coding model, the current study demonstrated a monotonic relation between adaptor extremity and the magnitude of the FEAA between happy and angry faces. Thus, these two facial expressions may also be encoded and represented by balanced activity between two neural populations tuned to each expression. In the current study, the test faces were morphed between happy and angry faces, and the participants were instructed to indicate whether the faces were happy or angry. Due to the use of the 2-AFC paradigm, a shift away from the adapting facial expression always resulted in a response of the other facial expression. Nevertheless, the monotonic increase of the FEAA as a function of the intensity of the adapting face demonstrates that the findings from previous studies using antiexpressions can be extended to two facial expressions.
Processing of facial expressions relies on both identity-dependent and identity-independent mechanisms (Campbell & Burke, 2009; C. J. Fox & Barton, 2007). The size of the FEAA is generally larger when the adapting and test faces are of the same identity than when they are different, indicating that an identity-dependent mechanism is involved in the FEAA. The fact that the FEAA is often observed even when the adapting and test faces are of different identities, however, indicates that an identity-independent mechanism is also involved. The intensity dependence of the FEAA with antiexpressions has been observed both when the identity of the adapting face differs from that of the test face (Skinner & Benton, 2012a) and when the identities are the same (Burton et al., 2015; Burton et al., 2013; Skinner & Benton, 2010). Furthermore, the magnitude of the FEAA with antiexpressions is modulated simultaneously by both the intensity of the adapting faces and the identity of the adapting/test faces (Skinner & Benton, 2012a). Thus, both the identity-independent and identity-dependent mechanisms of facial expression processing may share a common encoding scheme based on the opponent-coding mechanism. The FEAA between two facial expressions, however, has been studied only with the same identity for the adapting and test faces (Webster et al., 2004; Yang et al., 2010). Future studies should examine the intensity dependence of the FEAA between two facial expressions when the identities of the adapting and test faces are different (vs. the same).
The FEAA was observed even when the intensity of an adapting facial expression was too weak to be recognized as angry, and the monotonic increase in the magnitude of the FEAA began from 10% intensity of both angry and happy expressions. These results indicate that subtle changes in facial features that are not sufficient for recognition or labeling of facial expressions are potent enough to cause the FEAA. Thus, the current results suggest that perceptual processing of facial expressions and recognition of emotions are separate constructs represented by distinct systems (Skinner & Benton, 2010). Our results are also consistent with previous studies demonstrating that the FEAA occurs without conscious recognition of adapting expressions (Adams et al., 2010; Yang et al., 2010). However, it is worth noting that interocular suppression using continuous flash suppression may impact visual signals selectively (Yang & Blake, 2012), implying that selective perceptual processing of facial expressions outside visual awareness may result in the FEAA without conscious recognition of adapting faces. Thus, it is not clear whether the FEAA without awareness of adapting faces results from selective perceptual processing of suppressed adapting faces or from adaptation to emotional information represented without awareness (Killgore & Yurgelun-Todd, 2004; Vuilleumier et al., 2002; Whalen et al., 1998).
Interestingly, the FEAA with a very low-intensity adapting facial expression occurred only with angry, not happy, faces. The FEAA with happy adapting faces was observed only when the intensity of the adapting faces surpassed the minimum intensity required for a face to be recognized as happy. Although the reason is unclear, attention might play a role in this difference between angry and happy faces. Increased attention tends to enhance neural adaptation processes for faces (Rhodes et al., 2011) and for low-level visual features (Ling & Carrasco, 2006). Considering that angry faces attract more attention than happy faces (e.g., E. Fox et al., 2000; Pinkham, Griffin, Baron, Sasson, & Gur, 2010), increased attention to angry adapting faces might lower the threshold intensity of expression required for the FEAA. Although this speculation is consistent with previous findings that the FEAA without awareness of adapting stimuli occurs only when spatial attention is allocated to the location where an invisible adapting face is presented (Yang et al., 2010), future research should examine the role of attention in the intensity dependence of the FEAA.
In sum, the current study demonstrates that the FEAA between two facial expressions increases monotonically as a function of the intensity of an adapting facial expression. This result extends the scope of the opponent-coding model to the encoding and representation of two facial expressions. The FEAA with subtle changes in facial features further supports the perceptual nature of the FEAA. Thus, recognition or labeling of facial expressions may not be critical for the FEAA.
Participants were allowed to rest for as long as they wanted after completing each block. As a result, about 28% of participants could not complete all six blocks within a 2-hour session.
- Benton, C. P., Etchells, P. J., Porter, G., Clark, A. P., Penton-Voak, I. S., & Nikolov, S. G. (2007). Turning the other cheek: The viewpoint dependence of facial expression after-effects. Proceedings of the Royal Society B: Biological Sciences, 274(1622), 2131–2137. doi: 10.1098/rspb.2007.0473
- Dickinson, J. E., & Badcock, D. R. (2013). On the hierarchical inheritance of aftereffects in the visual system. Frontiers in Psychology, 4. doi: 10.3389/fpsyg.2013.00472
- Jeffery, L., Rhodes, G., McKone, E., Pellicano, E., Crookes, K., & Taylor, E. (2011). Distinguishing norm-based from exemplar-based coding of identity in children: Evidence from face identity aftereffects. Journal of Experimental Psychology: Human Perception and Performance, 37(6), 1824–1840. doi: 10.1037/a0025643
- Juricevic, I., & Webster, M. A. (2012). Selectivity of face aftereffects for expressions and anti-expressions. Frontiers in Psychology, 3. doi: 10.3389/fpsyg.2012.00004
- Lundqvist, D., Flykt, A., & Öhman, A. (1998). The Karolinska directed emotional faces (KDEF) [CD-ROM, 91–630]. Stockholm, Sweden: Department of Clinical Neuroscience, Psychology Section, Karolinska Institutet.
- Rhodes, G., Pond, S., Jeffery, L., Benton, C., Skinner, A., & Burton, N. (2017). Aftereffects support opponent coding of expression. Journal of Experimental Psychology: Human Perception and Performance, 43, 619–628.
- Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. Cambridge, UK: Cambridge University Press.
- Vuilleumier, P., Armony, J. L., Clarke, K., Husain, M., Driver, J., & Dolan, R. J. (2002). Neural response to emotional faces with and without awareness: Event-related fMRI in a parietal patient with visual extinction and spatial neglect. Neuropsychologia, 40(12), 2156–2166.
- Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. The Journal of Neuroscience, 18(1), 411–418.