Head Movement Quantification and Its Role in Facial Expression Study

Part of the book series: Lecture Notes in Electrical Engineering (LNEE, volume 52)

Abstract

Temporal modeling of facial expressions has attracted interest from several fields, including expression recognition, realism in computer animation, and behavioral studies in psychology. While considerable research has been devoted to capturing the movement of facial features and their temporal properties, work on head movement during the facial expression process remains scarce. Omitting head movement leaves an expression description incomplete, especially for expressions that involve head movement, such as disgust. This paper therefore proposes a method to track head movement using a dual pivot head tracking system (DPHT). To demonstrate its usefulness, the tracking system is applied to subjects depicting disgust. A two-tailed statistical analysis and a visual rendering comparison against a system that uses only a single pivot illustrate the practicality of the DPHT. The results show that expressions can be depicted more faithfully when head movement is incorporated into facial expression study.
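
The abstract does not give the DPHT's equations, so the following is only a minimal sketch of the two ideas it names: a head model driven by two chained pivots rather than one, and a two-tailed test comparing the two models. The pivot locations, angles, error values, and function names below are hypothetical placeholders, not the authors' DPHT.

```python
import numpy as np
from scipy import stats

def rot_x(theta):
    """Rotation matrix about the x-axis (pitch); theta in radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s, c]])

def head_point_dual_pivot(p, neck_pitch, skull_pitch, neck_pivot, skull_pivot):
    """Move a 3D head point through two chained pitch pivots:
    an upper (skull-base) pivot followed by a lower (neck) pivot."""
    p = rot_x(skull_pitch) @ (p - skull_pivot) + skull_pivot
    return rot_x(neck_pitch) @ (p - neck_pivot) + neck_pivot

# Synthetic per-frame tracking errors (pixels) for a single-pivot
# baseline and the dual-pivot model, compared with a two-tailed t-test.
rng = np.random.default_rng(0)
err_single = rng.normal(2.0, 0.5, 30)
err_dual = rng.normal(1.2, 0.5, 30)
t_stat, p_val = stats.ttest_ind(err_single, err_dual)
print(f"t = {t_stat:.2f}, two-tailed p = {p_val:.4f}")
```

A significant two-tailed p-value on such an error comparison would support the abstract's claim that the dual-pivot formulation tracks head movement better than a single pivot; the chapter's actual analysis may differ from this stand-in.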

Author information

Corresponding author

Correspondence to Fakhrul Hazman Yusoff.

Copyright information

© 2009 Springer Science+Business Media B.V.

About this chapter

Cite this chapter

Yusoff, F.H., Rahmat, R.W.O.K., Sulaiman, M.N., Shaharom, M.H., Majid, H.S.A. (2009). Head Movement Quantification and Its Role in Facial Expression Study. In: Huang, X., Ao, S.-I., Castillo, O. (eds) Intelligent Automation and Computer Engineering. Lecture Notes in Electrical Engineering, vol 52. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-3517-2_10

  • DOI: https://doi.org/10.1007/978-90-481-3517-2_10

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-90-481-3516-5

  • Online ISBN: 978-90-481-3517-2

  • eBook Packages: Engineering (R0)
