
OpenFACS: An Open Source FACS-Based 3D Face Animation System

  • Vittorio Cuculo
  • Alessandro D’Amelio
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11902)

Abstract

We present OpenFACS, an open source FACS-based 3D face animation system. OpenFACS is software that simulates realistic facial expressions through the manipulation of individual action units, as defined in the Facial Action Coding System. OpenFACS is accompanied by an API for generating real-time dynamic facial expressions on a three-dimensional character, and it can be embedded in existing systems without any prior experience in computer graphics. In this note, we discuss the adopted face model and the implemented architecture, and provide additional details of the model dynamics. Finally, a validation experiment is proposed to assess the effectiveness of the model.
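To illustrate how such an API might be driven from client code, the following is a minimal sketch of sending action-unit intensities to a running animation backend. It assumes a JSON-over-socket control channel; the host, port, message schema, AU intensity range, and the helper name send_action_units are illustrative assumptions, not the project's documented interface.

```python
# Sketch of driving a FACS-based animation backend such as OpenFACS.
# Assumption: the renderer listens on a local socket and accepts a JSON
# message mapping action-unit codes (e.g. "AU6", "AU12") to intensities.
# Host, port, and message format are illustrative, not the documented API.
import json
import socket

def send_action_units(aus, host="127.0.0.1", port=5000):
    """Send a dictionary of AU intensities to the (assumed) animation server."""
    message = json.dumps(aus).encode("utf-8")
    with socket.create_connection((host, port)) as sock:
        sock.sendall(message)

if __name__ == "__main__":
    # Example: a Duchenne smile combines the cheek raiser (AU6)
    # with the lip-corner puller (AU12).
    send_action_units({"AU6": 3.0, "AU12": 4.0})
```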

Keywords

Facial expression · FACS · Emotion · 3D facial animation · HCI


Acknowledgements

We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Quadro P6000 GPU used for this research.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. PHuSe Lab, Dipartimento di Informatica, University of Milan, Milan, Italy
