Hidden Markov Model for Action Recognition Using Joint Angle Acceleration

  • Sha Huang
  • Liqing Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8228)

Abstract

This paper proposes a method for recognizing human actions in video by adding a new feature, joint angle acceleration, to the feature space. The human body is described as a three-dimensional skeleton, and the feature vectors consist of several important joint angles together with their accelerations. A Hidden Markov Model (HMM) is used as the classification scheme, with the HMM models trained on sequences extracted from the CMU Graphics Lab Motion Capture Database. The representation is invariant to scale, coordinate system, and translation. A system is implemented to recognize four types of actions (walk, run, jump, and jumping jack) on both the CMU and Weizmann [7] datasets, where each video clip contains a single action type. The experimental results show excellent performance of the proposed approach: including acceleration yields an accuracy gain of up to 10.3% over the same method without it.
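As a rough illustration of the feature construction described above, the sketch below computes joint angles from 3D skeleton joint positions, approximates joint angle acceleration with second-order finite differences, and trains one Gaussian HMM per action class for classification. The choice of joint triples, the number of hidden states, and the use of hmmlearn's GaussianHMM are illustrative assumptions, not details taken from the paper.

```python
# Sketch: joint-angle + acceleration features with per-class Gaussian HMMs.
# Assumptions (not from the paper): hmmlearn's GaussianHMM as the classifier,
# 5 hidden states, and a finite-difference estimate of angle acceleration.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def joint_angle(a, b, c):
    """Angle at joint b (radians) formed by 3D points a-b-c."""
    u, v = a - b, c - b
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angle_features(skeleton, joint_triples):
    """skeleton: (T, J, 3) joint positions; returns (T, 2K) angles + accelerations."""
    angles = np.array([[joint_angle(f[i], f[j], f[k]) for (i, j, k) in joint_triples]
                       for f in skeleton])                    # (T, K) joint angles
    accel = np.gradient(np.gradient(angles, axis=0), axis=0)  # second difference in time
    return np.hstack([angles, accel])

def train_models(sequences_by_class, n_states=5):
    """Fit one HMM per action class on that class's feature sequences."""
    models = {}
    for label, seqs in sequences_by_class.items():
        X = np.vstack(seqs)                 # concatenated sequences
        lengths = [len(s) for s in seqs]    # per-sequence lengths for hmmlearn
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=100)
        m.fit(X, lengths)
        models[label] = m
    return models

def classify(models, features):
    """Pick the class whose HMM assigns the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(features))
```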

Keywords

Human action recognition · skeleton · joint angle · HMM · angle acceleration

References

  1. Black, M.J.: Explaining Optical Flow Events with Parameterized Spatio-Temporal Models. In: Computer Vision and Pattern Recognition, vol. 1, pp. 1326–1332 (1999)
  2. Efros, A.A., Berg, A.C., Mori, G., Malik, J.: Recognizing Action at a Distance. In: Proceedings of the Ninth IEEE International Conference on Computer Vision, pp. 726–733. IEEE (2003)
  3. Chomat, O., Martin, J., Crowley, J.L.: A Probabilistic Sensor for the Perception and the Recognition of Activities. In: Vernon, D. (ed.) ECCV 2000. LNCS, vol. 1842, pp. 487–503. Springer, Heidelberg (2000)
  4. Zelnik-Manor, L., Irani, M.: Event-Based Analysis of Video. In: Computer Vision and Pattern Recognition, pp. 123–130 (2001)
  5. Chen, H.S., Chen, H.T., Chen, Y.W., et al.: Human Action Recognition Using Star Skeleton. In: Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks, pp. 171–178. ACM (2006)
  6. Fujiyoshi, H., Lipton, A.J.: Real-Time Human Motion Analysis by Image Skeletonization. In: Proceedings of the Fourth IEEE Workshop on Applications of Computer Vision, pp. 15–21 (1998)
  7. Blank, M., Gorelick, L., Shechtman, E., et al.: Actions as Space-Time Shapes. In: Tenth IEEE International Conference on Computer Vision, ICCV 2005, vol. 2, pp. 1395–1402. IEEE (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Sha Huang (1)
  • Liqing Zhang (1)

  1. MOE-Microsoft Key Laboratory for Intelligent Computing and Intelligent Systems, Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China