
Facial Feature Tracking for Emotional Dynamic Analysis

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 6915)

Abstract

This article presents a feature-based framework that automatically tracks 18 facial landmarks for emotion recognition and emotional dynamic analysis. Using multi-kernel learning in a new way, we combine two methods: the first matches facial feature points between consecutive images, and the second relies on offline learning of the facial landmark appearance. Point matching yields jitter-free tracking, while the offline learning keeps the tracking framework from drifting. We train the tracking system on the Cohn-Kanade database and analyze the dynamics of emotions and Action Units on sequences from the MMI database. We accurately detect the temporal segments of facial expressions and report experimental results.
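The abstract describes combining two cues for each landmark: a matching term against the previous frame and an offline-learned appearance term, with the two weighted in the spirit of multiple kernel learning. The sketch below is a minimal, illustrative Python version of that idea only; the patch sizes, RBF kernels, fixed kernel weights, and all function names are assumptions made here for illustration, not the authors' implementation (which learns the kernel weights jointly with an SVM over 18 landmarks).

```python
# Illustrative sketch: score candidate positions of ONE facial landmark by
# combining a frame-to-frame matching kernel (reduces jitter) with an
# offline-learned appearance kernel (limits drift). Weights and kernel
# choices are assumptions, not the paper's learned MKL solution.
import numpy as np


def gaussian_kernel(a: np.ndarray, b: np.ndarray, gamma: float) -> float:
    """RBF similarity between two flattened grey-level patches."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.exp(-gamma * np.mean(d * d)))


def combined_score(candidate_patch: np.ndarray,
                   previous_patch: np.ndarray,
                   appearance_templates: list[np.ndarray],
                   beta_match: float = 0.6,
                   beta_appearance: float = 0.4,
                   gamma: float = 1e-3) -> float:
    """Weighted sum of the two cues; the betas stand in for the kernel
    weights that MKL would learn, and `appearance_templates` stands in
    for an offline-trained appearance model."""
    # Matching term: similarity to the patch tracked in the previous frame.
    k_match = gaussian_kernel(candidate_patch.ravel(),
                              previous_patch.ravel(), gamma)
    # Appearance term: best similarity to the offline-learned templates.
    k_app = max(gaussian_kernel(candidate_patch.ravel(), t.ravel(), gamma)
                for t in appearance_templates)
    return beta_match * k_match + beta_appearance * k_app


def track_landmark(frame: np.ndarray,
                   prev_position: tuple[int, int],
                   previous_patch: np.ndarray,
                   appearance_templates: list[np.ndarray],
                   search_radius: int = 8,
                   patch_size: int = 16) -> tuple[int, int]:
    """Exhaustively score candidates around the previous estimate and
    return the best (row, col) position for this landmark."""
    h, w = frame.shape
    half = patch_size // 2
    best_score, best_pos = -np.inf, prev_position
    y0, x0 = prev_position
    for y in range(max(half, y0 - search_radius),
                   min(h - half, y0 + search_radius + 1)):
        for x in range(max(half, x0 - search_radius),
                       min(w - half, x0 + search_radius + 1)):
            patch = frame[y - half:y + half, x - half:x + half]
            s = combined_score(patch, previous_patch, appearance_templates)
            if s > best_score:
                best_score, best_pos = s, (y, x)
    return best_pos
```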




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Senechal, T., Rapp, V., Prevost, L. (2011). Facial Feature Tracking for Emotional Dynamic Analysis. In: Blanc-Talon, J., Kleihorst, R., Philips, W., Popescu, D., Scheunders, P. (eds) Advanced Concepts for Intelligent Vision Systems. ACIVS 2011. Lecture Notes in Computer Science, vol 6915. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23687-7_45


  • DOI: https://doi.org/10.1007/978-3-642-23687-7_45

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23686-0

  • Online ISBN: 978-3-642-23687-7

  • eBook Packages: Computer Science, Computer Science (R0)
