Skeleton-Based Labanotation Generation Using Multi-model Aggregation

  • Ningwei Xie
  • Zhenjiang Miao
  • Jiaji Wang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12047)

Abstract

Labanotation is a well-known notation system for effectively recording and archiving dance. Generating Labanotation automatically with computer technology is a challenging but meaningful task, and existing methods can neither fully exploit the spatial characteristics of human motion nor distinguish subtle differences between similar movements. In this paper, we propose a multi-model aggregation method for Labanotation generation. First, two types of features are extracted, the joint feature and the Lie group feature, which reinforce the representation of human motion data. Second, a two-branch network architecture based on the Long Short-Term Memory (LSTM) network and LieNet is introduced for effective human movement recognition. LSTM is capable of modeling long-term dependencies in the temporal domain, while LieNet is a powerful network for spatial analysis based on the Lie group structure. Within this architecture, the joint feature and the Lie group feature are fed into the LSTM model and the LieNet model, respectively, for training. We then apply score fusion to combine the output class scores of the two branches; owing to the complementarity between LSTM and LieNet, the fused model performs better than either single model. In addition, a skip connection is applied in the LieNet structure, which simplifies the training procedure and improves convergence behavior. Evaluations on a standard motion capture dataset demonstrate the effectiveness of the proposed method and its superiority over previous works.
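
To make the two-branch design described above concrete, the following is a minimal Python/PyTorch sketch of the idea, not the authors' implementation: the LieNet branch is replaced here by a plain residual-MLP stub standing in for the Lie group layers, and the feature dimensions, class count, and fusion weight `alpha` are assumptions made for the example.

```python
# Sketch of the two-branch architecture: an LSTM branch over per-frame joint
# features, a stand-in for the LieNet branch over Lie group features, and a
# weighted fusion of the two branches' class scores.
import torch
import torch.nn as nn

class JointLSTMBranch(nn.Module):
    """LSTM over per-frame joint features; emits class scores."""
    def __init__(self, feat_dim=75, hidden=128, n_classes=20):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, frames, feat_dim)
        out, _ = self.lstm(x)         # model long-term temporal dependencies
        return self.fc(out[:, -1])    # scores from the last time step

class LieBranchStub(nn.Module):
    """Placeholder for LieNet: a residual MLP over flattened Lie group
    features, with a skip connection as mentioned in the abstract."""
    def __init__(self, feat_dim=114, hidden=128, n_classes=20):
        super().__init__()
        self.proj = nn.Linear(feat_dim, hidden)
        self.block = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden))
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, x):             # x: (batch, feat_dim)
        h = self.proj(x)
        h = h + self.block(h)         # skip connection eases optimization
        return self.fc(h)

def fuse_scores(s1, s2, alpha=0.5):
    """Weighted-sum fusion of softmax-normalized class scores."""
    p1 = torch.softmax(s1, dim=1)
    p2 = torch.softmax(s2, dim=1)
    return alpha * p1 + (1 - alpha) * p2

if __name__ == "__main__":
    joints = torch.randn(4, 60, 75)   # 4 clips, 60 frames, 25 joints * xyz
    lie = torch.randn(4, 114)         # flattened Lie group feature (assumed)
    scores = fuse_scores(JointLSTMBranch()(joints), LieBranchStub()(lie))
    print(scores.argmax(dim=1))       # fused per-clip class predictions
```

Weighted-sum fusion of softmax scores is one common choice for score-level fusion; the paper's exact fusion scheme and the LieNet layers may differ from this stub.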

Keywords

Labanotation · Motion capture data · LSTM · Lie group feature · Class score fusion

Acknowledgement

This work was supported by NSFC grants 61672089, 61273274, and 61572064, and by the National Key Technology R&D Program of China under grant 2012BAH01F03.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. School of Computer and Information Technology, Institute of Information Science, Beijing Jiaotong University, Beijing, China
