Active Shape Model vs. Deep Learning for Facial Emotion Recognition in Security

  • Conference paper
Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction (MPRSS 2016)

Abstract

As facial emotion recognition becomes more important every day, a research experiment was conducted to find the best approach to it. Two methods were tested: Deep Learning (DL) and the Active Shape Model (ASM). Researchers have applied both Deep Learning and Active Shape Models to facial emotion recognition in the past; this work asks which approach is better suited to the task. Both methods were tested on two different datasets, and the findings were consistent: the Active Shape Model outperformed Deep Learning. However, Deep Learning was faster and easier to implement, which suggests that with better Deep Learning software, Deep Learning may surpass ASM at recognizing and classifying facial emotions. In this experiment, Deep Learning achieved 60% accuracy on the CAFE dataset, whereas the Active Shape Model achieved 93%. Likewise, on the JAFFE dataset, Deep Learning achieved 63% accuracy and the Active Shape Model 83%.
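To make the deep-learning side of the comparison concrete, below is a minimal sketch of a convolutional classifier for face crops. The paper does not specify its network architecture, so the 48x48 grayscale input, the seven emotion classes, the layer sizes, and the use of PyTorch are all illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class EmotionCNN(nn.Module):
    """Small CNN mapping 48x48 grayscale face crops to emotion classes.

    Architecture is an illustrative assumption, not the paper's exact model.
    """

    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),   # 48x48 -> 48x48
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 24x24
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 24x24 -> 24x24
            nn.ReLU(),
            nn.MaxPool2d(2),                              # -> 12x12
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),  # raw logits; apply softmax for probabilities
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))


# Smoke test on a random batch standing in for preprocessed face crops.
model = EmotionCNN()
logits = model(torch.randn(4, 1, 48, 48))
print(logits.shape)  # torch.Size([4, 7])
```

The ASM branch, by contrast, fits a statistical shape model to facial landmarks and classifies the resulting shape parameters; one plausible reason ASM outperformed DL here is that CAFE and JAFFE are small datasets, and a hand-crafted shape model needs far less training data than a CNN.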



Acknowledgment

This project is supported by the National Science Foundation under award CNS-1359323, which funds research experiences for undergraduate students (https://sites.google.com/a/ualr.edu/cs-reu-site-ualr/).

Author information


Correspondence to Mariofanna Milanova.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Bebawy, M., Anwar, S., Milanova, M. (2017). Active Shape Model vs. Deep Learning for Facial Emotion Recognition in Security. In: Schwenker, F., Scherer, S. (eds.) Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction. MPRSS 2016. Lecture Notes in Computer Science, vol. 10183. Springer, Cham. https://doi.org/10.1007/978-3-319-59259-6_1

  • DOI: https://doi.org/10.1007/978-3-319-59259-6_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-59258-9

  • Online ISBN: 978-3-319-59259-6

  • eBook Packages: Computer Science, Computer Science (R0)
