Abstract
As Facial Emotion Recognition becomes more important every day, a research experiment was conducted to find the best approach to it. Two approaches were tested: Deep Learning (DL) and the Active Shape Model (ASM). Researchers have applied both methods to facial emotion recognition in the past, and this work set out to determine which approach is better suited to the task. Both methods were tested on two datasets, and the findings were consistent: the Active Shape Model outperformed Deep Learning. However, Deep Learning was faster and easier to implement, which suggests that with improved Deep Learning software it may eventually surpass ASM at recognizing and classifying facial emotions. In this experiment, Deep Learning achieved 60% accuracy on the CAFE dataset, whereas the Active Shape Model achieved 93%. Likewise, on the JAFFE dataset, Deep Learning achieved 63% accuracy and the Active Shape Model achieved 83%.
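The comparison above comes down to classification accuracy on held-out face images. As a minimal sketch (using hypothetical emotion labels for illustration, not the paper's CAFE/JAFFE data), accuracy for each method can be computed as the fraction of correctly predicted labels:

```python
def accuracy(predicted, actual):
    """Fraction of facial-emotion labels predicted correctly."""
    assert len(predicted) == len(actual), "label lists must align"
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Hypothetical ground truth and predictions for 5 test images.
actual   = ["happy", "sad", "angry", "happy", "neutral"]
asm_pred = ["happy", "sad", "angry", "happy", "sad"]     # 4/5 correct
dl_pred  = ["happy", "sad", "happy", "angry", "sad"]     # 2/5 correct

print(f"ASM accuracy: {accuracy(asm_pred, actual):.0%}")  # prints "ASM accuracy: 80%"
print(f"DL accuracy:  {accuracy(dl_pred, actual):.0%}")   # prints "DL accuracy:  40%"
```

The same metric is applied to both classifiers on identical test sets, which is what makes the 60%-vs-93% and 63%-vs-83% comparisons in the abstract directly interpretable.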
Acknowledgment
This project is supported by the National Science Foundation under award CNS-1359323, which funds research experiences for undergraduate students (https://sites.google.com/a/ualr.edu/cs-reu-site-ualr/).
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Bebawy, M., Anwar, S., Milanova, M. (2017). Active Shape Model vs. Deep Learning for Facial Emotion Recognition in Security. In: Schwenker, F., Scherer, S. (eds) Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction. MPRSS 2016. Lecture Notes in Computer Science(), vol 10183. Springer, Cham. https://doi.org/10.1007/978-3-319-59259-6_1
DOI: https://doi.org/10.1007/978-3-319-59259-6_1
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-59258-9
Online ISBN: 978-3-319-59259-6
eBook Packages: Computer Science, Computer Science (R0)