Abstract
Temporal modeling of facial expressions is of interest to several fields, including expression recognition, realism in computer animation, and behavioral studies in psychology. While much research has been conducted on capturing the temporal movement of facial features, work on head movement during the facial expression process is lacking. Omitting head movement leaves an expression description incomplete, especially for expressions that involve the head, such as disgust. This paper therefore proposes a method to track head movement using a dual pivot head tracking system (DPHT). To demonstrate its usefulness, the tracking system is applied to subjects depicting disgust. A simple two-tailed statistical analysis and a visual rendering comparison against a single-pivot system illustrate the practicality of the DPHT. Results show that better depictions of expression can be achieved if head movement is incorporated into facial expression studies.
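The dual-pivot idea can be sketched geometrically: a single-pivot model rotates the whole head about one point, whereas a dual-pivot model composes a rotation about a lower (neck) pivot with a rotation about an upper (head) pivot that has itself been displaced by the first rotation. The sketch below is a minimal 2D illustration under assumed pivot positions and angles; the pivot coordinates and function names are illustrative assumptions, not the authors' implementation.

```python
import math

def rotate(point, pivot, angle_rad):
    """Rotate a 2D point about a pivot by angle_rad (counter-clockwise)."""
    px, py = pivot
    x, y = point[0] - px, point[1] - py
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (px + x * c - y * s, py + x * s + y * c)

def dual_pivot_position(point, neck_pivot, neck_angle, head_pivot, head_angle):
    """Apply the neck rotation first, then rotate about the head pivot,
    which has itself been carried along by the neck rotation."""
    moved_head_pivot = rotate(head_pivot, neck_pivot, neck_angle)
    after_neck = rotate(point, neck_pivot, neck_angle)
    return rotate(after_neck, moved_head_pivot, head_angle)

# Illustrative geometry: nose tip at (0, 10), neck pivot at the origin,
# head pivot at (0, 6). Compare a single 20-degree rotation with the same
# total angle split across the two pivots.
nose = (0.0, 10.0)
single = rotate(nose, (0.0, 0.0), math.radians(20))
dual = dual_pivot_position(nose, (0.0, 0.0), math.radians(10),
                           (0.0, 6.0), math.radians(10))
```

Even with the same total angle, the two models place the tracked feature at different positions, which is why the paper's single-pivot versus dual-pivot comparison can show a measurable difference.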
© 2009 Springer Science+Business Media B.V.
Cite this chapter
Yusoff, F.H., Rahmat, R.W.O.K., Sulaiman, M.N., Shaharom, M.H., Majid, H.S.A. (2009). Head Movement Quantification and Its Role in Facial Expression Study. In: Huang, X., Ao, SI., Castillo, O. (eds) Intelligent Automation and Computer Engineering. Lecture Notes in Electrical Engineering, vol 52. Springer, Dordrecht. https://doi.org/10.1007/978-90-481-3517-2_10
Print ISBN: 978-90-481-3516-5
Online ISBN: 978-90-481-3517-2