Real-Time Emotion Recognition Using Biologically Inspired Models

  • Keith Anderson
  • Peter W. McOwan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2688)

Abstract

A fully automated, multi-stage architecture for emotion recognition is presented. Faces are located using a tracker based upon the ratio template algorithm [1]. Optical flow of the face is then determined using a multi-channel gradient model [2]. The resulting speed and direction information is averaged over different parts of the face, and ratios are taken to determine how facial parts are moving relative to one another. This information is fed into multi-layer perceptrons trained using back propagation. The system then assigns any facial expression to one of four categories: happiness, sadness, surprise, or disgust. All three key stages of the architecture are inspired by biological systems. This emotion recognition system runs in real time and has a range of applications in the field of human-computer interaction.
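As a quick illustration of the pipeline the abstract describes, the following Python sketch fills in each stage with placeholders. It is not the authors' implementation: the ratio template tracker [1] and the multi-channel gradient model [2] are replaced by a random stand-in flow field, and the region layout, ratio feature set, and network sizes are assumptions made for the example rather than the paper's actual parameters.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical facial sub-regions as (row, col) slices into a 64x64
    # flow field; the paper does not spell out its exact region layout,
    # so these are placeholders for illustration.
    REGIONS = {
        "left_brow":  (slice(4, 20),  slice(4, 30)),
        "right_brow": (slice(4, 20),  slice(34, 60)),
        "cheeks":     (slice(28, 42), slice(8, 56)),
        "mouth":      (slice(42, 60), slice(16, 48)),
    }

    def motion_features(speed, direction):
        """Average flow speed and direction per region, then take ratios
        of regional mean speeds so the features describe how facial
        parts move relative to one another (as in the abstract)."""
        mean_speed = {}
        feats = []
        for name, (rows, cols) in REGIONS.items():
            mean_speed[name] = speed[rows, cols].mean()
            d = direction[rows, cols]
            # encode direction as (cos, sin) to avoid angle wrap-around
            feats += [np.cos(d).mean(), np.sin(d).mean()]
        names = list(REGIONS)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                feats.append(mean_speed[a] / (mean_speed[b] + 1e-6))
        return np.asarray(feats)

    class MLP:
        """One-hidden-layer perceptron; in the paper such networks are
        trained with back propagation, which is omitted here."""
        def __init__(self, n_in, n_hidden, n_out):
            self.w1 = rng.normal(0, 0.1, (n_in, n_hidden))
            self.b1 = np.zeros(n_hidden)
            self.w2 = rng.normal(0, 0.1, (n_hidden, n_out))
            self.b2 = np.zeros(n_out)

        def forward(self, x):
            h = np.tanh(x @ self.w1 + self.b1)
            z = h @ self.w2 + self.b2
            e = np.exp(z - z.max())      # softmax over the four emotions
            return e / e.sum()

    EMOTIONS = ["happiness", "sadness", "surprise", "disgust"]

    # Stand-in for the outputs of the tracker [1] and the optical-flow
    # model [2]: per-pixel speed and direction over the located face.
    speed = rng.random((64, 64))
    direction = rng.uniform(0, 2 * np.pi, (64, 64))

    x = motion_features(speed, direction)   # 8 direction + 6 ratio features
    net = MLP(n_in=x.size, n_hidden=16, n_out=len(EMOTIONS))
    probs = net.forward(x)
    print(EMOTIONS[int(np.argmax(probs))], probs)

Encoding each region's mean direction as a (cos, sin) pair is one conventional way to keep angular averages well behaved; the paper's exact feature encoding may differ.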

References

  [1] Sinha P.: Perceiving and Recognising Three-Dimensional Forms. PhD dissertation, M.I.T. Available at http://theses.mit.edu:80/Dienst/UI/2.0/Describe/0018.mit.theses%2f1995-70?abstract
  [2] Johnston A., McOwan P.W., Benton C.P.: Robust Velocity Computation from a Biologically Motivated Model of Motion Perception. Proceedings of the Royal Society of London, Vol. 266. (1999) 509–518
  [3] Picard R.W.: Towards Agents that Recognize Emotion. Actes Proceedings IMAGINA. (1998) 153–155
  [4] Bartlett M.S., Hager J.C., Ekman P., Sejnowski T.J.: Measuring Facial Expressions by Computer Image Analysis. Psychophysiology, Vol. 36. (1999) 253–263
  [5] Bartlett M.S., Donato G., Movellan J.R., Hager J.C., Ekman P., Sejnowski T.J.: Face Image Analysis for Expression Measurement and Detection of Deceit. Proceedings of the 6th Annual Joint Symposium on Neural Computation. (1999)
  [6] Ogden B.: Interactive Vision in Robot-Human Interaction. Progression Report. (2001) 42–55
  [7] Himer W., Schneider F., Kost G., Heimann H.: Computer-based Analysis of Facial Action: A New Approach. Journal of Psychophysiology, Vol. 5(2). (1991) 189–195
  [8] Yacoob Y., Davis L.: Recognizing Facial Expressions by Spatio-Temporal Analysis. IEEE CVPR. (1993) 70–75
  [9] Rosenblum M., Yacoob Y., Davis L.: Human Emotion Recognition from Motion Using a Radial Basis Function Network Architecture. IEEE Workshop on Motion of Non-Rigid and Articulated Objects. (1994)
  [10] Lien J.J., Kanade T., Cohn J.F., Li C.: Automated Facial Expression Recognition Based on FACS Action Units. Third IEEE International Conference on Automatic Face and Gesture Recognition. (1998) 390–395
  [11] Essa I.A., Pentland A.P.: Coding, Analysis, Interpretation, and Recognition of Facial Expressions. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 19(7). (1997) 757–763
  [12] Lien J.J., Kanade T., Cohn J.F., Li C.: Automated Facial Expression Recognition Based on FACS Action Units. Third IEEE International Conference on Automatic Face and Gesture Recognition. (1998) 390–395
  [13] Tian Y., Kanade T., Cohn J.F.: Recognizing Action Units for Facial Expression Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23(2). (2001) 97–115
  [14] Ekman P., Friesen W.: Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, CA. (1978)
  [15] McOwan P.W., Benton C., Dale J., Johnston A.: A Multi-differential Neuromorphic Approach to Motion Detection. International Journal of Neural Systems, Vol. 9. (1999) 429–434
  [16] Scassellati B.: Eye Finding via Face Detection for a Foveated, Active Vision System. Proceedings of the Fifteenth National Conference on Artificial Intelligence. (1998)
  [17] Anderson K., McOwan P.W.: Robust Real-Time Face Tracker for Cluttered Environments. Submitted to Computer Vision & Image Understanding.
  [18] Tovée M.J.: An Introduction to the Visual System. Cambridge University Press. (1996)
  [19] Hietanen J.K.: Does Your Gaze Direction and Head Orientation Shift My Visual Attention? Neuroreport, Vol. 10. (1999) 3443–3447
  [20] Hill H., Johnston A.: Categorising Sex and Identity from the Biological Motion of Faces. Current Biology, Vol. 11. (2001) 880–885
  [21] Lien J.J.J., Kanade T., Cohn J.F., Li C.C.: Detection, Tracking, and Classification of Subtle Changes in Facial Expression. Journal of Robotics and Autonomous Systems, Vol. 31. (2000) 131–146
  [22] Cohn J.F., Zlochower A., Lien J., Kanade T.: Automated Face Analysis by Feature Point Tracking Has High Concurrent Validity with Manual FACS Coding. Psychophysiology, Vol. 36. (1999) 35–43

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Keith Anderson (1)
  • Peter W. McOwan (1)

  1. Department of Computer Science, Queen Mary College, University of London, London, UK
