Reflecting User Faces in Avatars

  • Conference paper
Intelligent Virtual Agents (IVA 2010)

Abstract

This paper presents a model for generating personalized facial animations for avatars using Performance-Driven Animation (PDA). The approach lets users reflect their facial expressions in their avatars, taking as input a small set of feature points provided by Computer Vision (CV) tracking algorithms. The model is based on the MPEG-4 Facial Animation standard and uses a hierarchy of animation parameters to animate face regions for which CV data is lacking. To deform the face, we use two skin-mesh deformation methods that are computationally cheap and provide avatar animation in real time. We conducted a qualitative evaluation with subjects; the results show that the proposed model generates coherent and visually satisfactory animations.
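The deformation stage described above can be pictured with a small sketch: the tracked feature points yield sparse displacements, which an interpolation scheme spreads over the whole skin mesh. The radial-basis-function interpolant below is one common, computationally cheap choice for this step; it is an illustrative assumption, not necessarily the exact method used in the paper, and all names and the Gaussian kernel are hypothetical.

```python
import numpy as np

def rbf_deform(control_points, displacements, vertices, sigma=0.3):
    """Propagate sparse feature-point displacements to all mesh vertices
    using Gaussian radial basis functions (one linear system, solved once
    per frame, reproducing the displacements exactly at control points)."""
    # Pairwise distances between control points -> kernel matrix.
    d = np.linalg.norm(control_points[:, None] - control_points[None, :], axis=-1)
    phi = np.exp(-(d / sigma) ** 2)                 # (n_ctrl, n_ctrl)
    # Per-axis RBF weights such that phi @ weights == displacements.
    weights = np.linalg.solve(phi, displacements)   # (n_ctrl, 3)
    # Evaluate the interpolant at every mesh vertex.
    dv = np.linalg.norm(vertices[:, None] - control_points[None, :], axis=-1)
    return vertices + np.exp(-(dv / sigma) ** 2) @ weights
```

Because the Gaussian kernel matrix is positive definite for distinct control points, the interpolant passes exactly through the tracked feature points while deforming nearby vertices smoothly, which is what makes such schemes suitable for real-time avatar animation.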





Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Baptista Queiroz, R. et al. (2010). Reflecting User Faces in Avatars. In: Allbeck, J., Badler, N., Bickmore, T., Pelachaud, C., Safonova, A. (eds) Intelligent Virtual Agents. IVA 2010. Lecture Notes in Computer Science(), vol 6356. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15892-6_46

  • DOI: https://doi.org/10.1007/978-3-642-15892-6_46

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15891-9

  • Online ISBN: 978-3-642-15892-6
