
Human motions and emotions recognition inspired by LMA qualities

  • Insaf Ajili
  • Malik Mallem
  • Jean-Yves Didier
Original Article

Abstract

The purpose of this paper is to describe human motions and emotions that appear in real video images with compact and informative representations. We aim to recognize expressive motions and to analyze the relationship between human body features and emotions. We propose a new descriptor vector for expressive human motions inspired by the Laban movement analysis (LMA) method, a descriptive language with an underlying semantics that makes it possible to qualify human motion in its different aspects. The proposed descriptor is fed into a machine learning framework including a random decision forest, a multi-layer perceptron and two multiclass support vector machine methods. We evaluated our descriptor first for motion recognition and second for emotion recognition from the analysis of expressive body movements. Preliminary experiments with three public datasets, MSRC-12, MSR Action 3D and UTKinect, showed that our model performs better than many existing motion recognition methods. We also built a dataset composed of 10 control motions (move, turn left, turn right, stop, sit down, wave, dance, introduce yourself, increase velocity, decrease velocity), on which our descriptor vector achieved high recognition performance. In the second experimental part, we evaluated our descriptor with a dataset composed of expressive gestures performed with four basic emotions selected from Russell’s circumplex model of affect (happy, angry, sad and calm). The same machine learning methods were used to recognize human emotions from expressive motions. A 3D virtual avatar was introduced to reproduce human body motions, and three aspects were analyzed: (1) how humans classify the expressed emotions, (2) how humans evaluate the motion descriptor and (3) the relationship between human emotions and motion features.
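The abstract describes the pipeline only at a high level. As a purely hypothetical sketch (not the authors' descriptor or code), the following Python fragment illustrates the general shape of such a pipeline: a few LMA-inspired statistics (speed and acceleration magnitudes as rough Effort proxies, bounding-box extent as a rough Space proxy) are computed from sequences of 3D joint positions and fed to the three classifier families named above. All array shapes, feature choices and data here are assumptions made for illustration.

    # Minimal illustrative sketch only; NOT the authors' descriptor or code.
    # Assumes each motion clip is an (n_frames, n_joints, 3) array of 3D joints.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def lma_inspired_descriptor(clip):
        """Compact feature vector loosely inspired by LMA qualities."""
        vel = np.diff(clip, axis=0)          # per-frame joint displacement
        acc = np.diff(vel, axis=0)           # second difference ~ acceleration
        speed = np.linalg.norm(vel, axis=2)  # shape (n_frames - 1, n_joints)
        accel = np.linalg.norm(acc, axis=2)
        extent = clip.max(axis=(0, 1)) - clip.min(axis=(0, 1))  # Space proxy
        return np.concatenate([
            speed.mean(axis=0), speed.std(axis=0),  # Effort (Time) proxies
            accel.mean(axis=0),                     # Effort (Flow) proxy
            extent,                                 # overall spatial extent
        ])

    # Hypothetical data: 200 clips, 60 frames, 20 joints, 10 motion classes.
    rng = np.random.default_rng(0)
    X = np.stack([lma_inspired_descriptor(rng.normal(size=(60, 20, 3)))
                  for _ in range(200)])
    y = rng.integers(0, 10, size=200)

    # The three classifier families named in the abstract.
    for name, clf in [("RDF", RandomForestClassifier(n_estimators=100)),
                      ("MLP", MLPClassifier(max_iter=500)),
                      ("SVM", SVC(kernel="rbf", decision_function_shape="ovr"))]:
        scores = cross_val_score(clf, X, y, cv=5)
        print(f"{name}: mean cross-validated accuracy {scores.mean():.2f}")

On random data the accuracies are of course at chance level; the point is only the structure of the pipeline: descriptor extraction followed by multiclass classification with interchangeable back-ends.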

Keywords

Motion recognition · Emotion recognition · Laban movement analysis · Feature importance · Machine learning · Human perception

Acknowledgements

We would like to thank the staff of the University of Evry Val d’Essonne who participated in our datasets. This work was partially supported by the Strategic Research Initiatives project iCODE, accredited by Université Paris-Saclay.

Supplementary material

Supplementary material 1: 371_2018_1619_MOESM1_ESM.rar (2.7 MB)

References

  1. De Gelder, B.: Why bodies? Twelve reasons for including bodily expressions in affective neuroscience. Philos. Trans. R. Soc. B Biol. Sci. 364(1535), 3475–3484 (2009)
  2. Russell, J.A.: Is there universal recognition of emotion from facial expression? A review of the cross-cultural studies. Psychol. Bull. 115, 102–141 (1994)
  3. Ekman, P.: Facial Expressions. Handbook of Cognition and Emotion, vol. 16, pp. 301–320. Wiley-Blackwell, New Jersey (2005)
  4. Aviezer, H., Hassin, R., Ryan, J., Grady, C., Susskind, J., Anderson, A., Moscovitch, M., Bentin, S.: Angry, disgusted, or afraid? Studies on the malleability of emotion perception. Psychol. Sci. 19(7), 724–732 (2008)
  5. Aviezer, H., Bentin, S., Dudarev, V., Hassin, R.: The automaticity of emotional face-context integration. Emotion 11(6), 1406–1414 (2011)
  6. Ajili, I., Mallem, M., Didier, J.Y.: Robust human action recognition system using Laban movement analysis. Procedia Comput. Sci. 112, 554–563 (2017)
  7. Ajili, I., Mallem, M., Didier, J.Y.: Gesture recognition for humanoid robot teleoperation. In: 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), pp. 1115–1120 (2017)
  8. Russell, J.A.: A circumplex model of affect. J. Personal. Soc. Psychol. 39(6), 1161–1178 (1980)
  9. Gong, D., Medioni, G., Zhao, X.: Structured time series analysis for human action segmentation and recognition. IEEE Trans. Pattern Anal. Mach. Intell. 36(7), 1414–1427 (2014)
  10. Junejo, I.N., Junejo, K.N., Al Aghbari, Z.: Silhouette-based human action recognition using SAX-Shapes. Vis. Comput. 30(3), 259–269 (2014)
  11. Jiang, X., Zhong, F., Peng, Q., Qin, X.: Online robust action recognition based on a hierarchical model. Vis. Comput. 30(9), 1021–1033 (2014)
  12. Wang, H., Kläser, A., Schmid, C., Liu, C.L.: Dense trajectories and motion boundary descriptors for action recognition. Int. J. Comput. Vis. 103(1), 60–79 (2013)
  13. Xia, L., Aggarwal, J.K.: Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2834–2841 (2013)
  14. Oreifej, O., Liu, Z.: HON4D: histogram of oriented 4D normals for activity recognition from depth sequences. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 716–723 (2013)
  15. Chi, D., Costa, M., Zhao, L., Badler, N.: The EMOTE model for effort and shape. In: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH ’00, pp. 173–182 (2000)
  16. Kapadia, M., Chiang, I.K., Thomas, T., Badler, N.I., Kider Jr., J.T.: Efficient motion retrieval in large motion databases. In: Proceedings of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, pp. 19–28 (2013)
  17. Müller, M., Röder, T., Clausen, M.: Efficient content-based retrieval of motion capture data. ACM Trans. Graph. 24(3), 677–685 (2005)
  18. Durupinar, F., Kapadia, M., Deutsch, S., Neff, M., Badler, N.: PERFORM: perceptual approach for adding OCEAN personality to human motion using Laban movement analysis. ACM Trans. Graph. 36(1), 6 (2016)
  19. Hsu, E., Pulli, K., Popović, J.: Style translation for human motion. ACM Trans. Graph. 24(3), 1082–1089 (2005)
  20. Xia, S., Wang, C., Chai, J., Hodgins, J.: Realtime style transfer for unlabeled heterogeneous human motion. ACM Trans. Graph. 34(4), 119:1–119:10 (2015)
  21. Yumer, M.E., Mitra, N.J.: Spectral style transfer for human motion between independent actions. ACM Trans. Graph. 35(4), 137:1–137:8 (2016)
  22. Aristidou, A., Zeng, Q., Stavrakis, E., Yin, K., Cohen-Or, D., Chrysanthou, Y., Chen, B.: Emotion control of unstructured dance movements. In: Symposium on Computer Animation (2017)
  23. Aristidou, A., Stavrakis, E., Papaefthimiou, M., Papagiannakis, G., Chrysanthou, Y.: Style-based motion analysis for dance composition. Vis. Comput. (2017)
  24. von Laban, R., Ullmann, L.: The Mastery of Movement. Macdonald and Evans, Boston (1971)
  25. Glowinski, D., Dael, N., Camurri, A., Volpe, G., Mortillaro, M., Scherer, K.: Toward a minimal representation of affective gestures. IEEE Trans. Affect. Comput. 2(2), 106–118 (2011)
  26. Bouchard, D., Badler, N.: Semantic segmentation of motion capture using Laban movement analysis. In: Intelligent Virtual Agents, pp. 37–44. Springer, Berlin, Heidelberg (2007)
  27. Samadani, A., Burton, S., Gorbet, R., Kulic, D.: Laban effort and shape analysis of affective hand and arm movements. In: Humaine Association Conference on Affective Computing and Intelligent Interaction, pp. 343–348 (2013)
  28. Truong, A., Boujut, H., Zaharia, T.: Laban descriptors for gesture recognition and emotional analysis. Vis. Comput. 32(1), 83–98 (2016)
  29. Aristidou, A., Charalambous, P., Chrysanthou, Y.: Emotion analysis and classification: understanding the performers’ emotions using the LMA entities. Comput. Graph. Forum 34(6), 262–276 (2015)
  30. Senecal, S., Cuel, L., Aristidou, A., Magnenat-Thalmann, N.: Continuous body emotion recognition system during theater performances. Comput. Anim. Virtual Worlds 27(3–4), 311–320 (2016)
  31. Cimen, G., Ilhan, H., Capin, T., Gurcay, H.: Classification of human motion based on affective state descriptors. Comput. Anim. Virtual Worlds 24(3–4), 355–363 (2013)
  32. Fothergill, S., Mentis, H., Kohli, P., Nowozin, S.: Instructing people for training gestural interactive systems. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI ’12, pp. 1737–1746. ACM (2012)
  33. Barber, C.B., Dobkin, D.P., Huhdanpaa, H.: The Quickhull algorithm for convex hulls. ACM Trans. Math. Softw. 22(4), 469–483 (1996)
  34. Xia, L., Chen, C.-C., Aggarwal, J.K.: View invariant human action recognition using histograms of 3D joints. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 20–27 (2012)
  35. Li, W., Zhang, Z., Liu, Z.: Action recognition based on a bag of 3D points. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pp. 9–14 (2010)
  36. Quinlan, J.R.: Learning with Continuous Classes, pp. 343–348. World Scientific, Singapore (1992)
  37. Breiman, L.: Classification and Regression Trees. Wadsworth International Group, Belmont, CA (1984)
  38. Díaz-Uriarte, R., Alvarez de Andrés, S.: Gene selection and classification of microarray data using random forest. BMC Bioinform. 7, 3 (2006)
  39. Hripcsak, G., Rothschild, A.S.: Technical brief: agreement, the F-measure, and reliability in information retrieval. J. Am. Med. Inform. Assoc. 12(3), 296–298 (2005)
  40. Lehrmann, A.M., Gehler, P.V., Nowozin, S.: Efficient nonlinear Markov models for human motion. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1314–1321 (2014)
  41. Song, Y., Morency, L.P., Davis, R.: Distribution-sensitive learning for imbalanced datasets. In: 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–6 (2013)
  42. Truong, A., Zaharia, T.: Dynamic gesture recognition with Laban movement analysis and hidden Markov models. In: Proceedings of the 33rd Computer Graphics International, CGI ’16, pp. 21–24. ACM (2016)
  43. Slama, R., Wannous, H., Daoudi, M.: Grassmannian representation of motion depth for 3D human gesture and action recognition. In: 22nd International Conference on Pattern Recognition, pp. 3499–3504 (2014)
  44. Arlot, S., Celisse, A.: A survey of cross-validation procedures for model selection. Stat. Surv. 4, 40–79 (2010)
  45. Bland, J.M., Altman, D.G.: Statistics notes: Cronbach’s alpha. BMJ 314(7080), 572 (1997)
  46. Knight, H., Thielstrom, R., Simmons, R.: Expressive path shape (swagger): simple features that illustrate a robot’s attitude toward its goal in real time. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 1475–1482 (2016)
  47. Nishimura, K., Kubota, N., Woo, J.: Design support system for emotional expression of robot partners using interactive evolutionary computation. In: IEEE International Conference on Fuzzy Systems, pp. 1–7 (2012)

Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. IBISC, Univ Evry, Université Paris-Saclay, Evry Cedex, France