FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception

Abstract

A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but free and open-source alternatives created specifically for face perception research are lacking. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a “morphing” sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D “anti-faces” and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features; and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study of human perceptual judgments of stimuli produced with the toolkit.
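
Capabilities (1), (2), and (4) above all reduce to simple arithmetic on the vector of shape parameters that defines a MakeHuman model. The following minimal Python sketch illustrates that idea; it is illustrative only, not FaReT's actual API, and its function names and toy parameter vectors are hypothetical.

    import numpy as np

    def interpolate(source, target, steps):
        """Morphing (capability 1): return `steps` parameter vectors that
        move linearly from `source` to `target`; rendering each vector
        gives one frame of the morph sequence."""
        return [(1.0 - t) * source + t * target
                for t in np.linspace(0.0, 1.0, steps)]

    def extrapolate(face, average, strength):
        """Caricatures and anti-faces (capability 2): shift a face along
        the axis through the average face. strength = 1.0 reproduces the
        original face, strength > 1.0 exaggerates its distinctive
        features (a caricature), and strength = -1.0 reflects the face
        through the average, yielding an "anti-face"."""
        return average + strength * (face - average)

    # An average model (capability 4) is simply the mean parameter vector:
    #     average = np.mean(np.stack(face_vectors), axis=0)

    # Toy usage with three hypothetical shape parameters per face:
    face_a = np.array([0.2, -0.5, 0.8])
    face_b = np.array([-0.1, 0.4, 0.3])
    morph_frames = interpolate(face_a, face_b, steps=30)
    anti_a = extrapolate(face_a, average=(face_a + face_b) / 2.0, strength=-1.0)

Because these operations act on model parameters rather than on pixels, interpolated and extrapolated faces remain 3D models that can be rendered with any pose, expression, or lighting, which distinguishes this approach from traditional image-based morphing.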


Acknowledgements

Research reported in this publication was supported by the National Institute of Mental Health of the National Institutes of Health under Award Number R21MH112013 to Fabian A. Soto. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. S.D.G.

Open Practices Statement

The data and materials for the validation studies reported here are available at https://osf.io/grp9d/; these studies were not preregistered. The FaReT toolbox described here, which was used to generate the stimuli for the validation study, is available at https://github.com/fsotoc/FaReT and is also linked on the OSF project page.

Author information

Corresponding author

Correspondence to Fabian A. Soto.

Appendices

Appendix 1

Table 4 Parameters of the Face Identity Model

Appendix 2

Table 5 Parameters of the Expression Pose Model

About this article

Cite this article

Hays, J., Wong, C. & Soto, F.A. FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behav Res (2020). https://doi.org/10.3758/s13428-020-01421-4

Keywords

  • Face database
  • Face morphing
  • Face identity
  • Face expression
  • Computer-generated faces
  • Dynamic faces