Gait Recognition from Front and Back View Sequences Captured Using Kinect

  • Pratik Chattopadhyay
  • Shamik Sural
  • Jayanta Mukherjee
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8251)


Abstract

In this paper, we propose a key pose based gait recognition approach that uses skeleton joint information derived from the depth data of a Kinect sensor. We consider situations in which such depth cameras are mounted above the entry and exit points of a surveillance zone, capturing the back and front views, respectively, of subjects entering the zone. Three-dimensional geometric transformations are used to map the skeletons captured from the back view to an equivalent front view. A gait cycle is divided into a number of key poses, and the trajectory followed by each skeleton joint within a key pose is used to derive the gait features for that pose. To recognize a subject, the available key poses are compared with the corresponding key poses of the training subjects. Experimental results show that the proposed method achieves higher recognition accuracy than competing approaches.
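The back-to-front mapping described above can be illustrated with a minimal sketch. This is not the authors' exact transformation (the paper's details are not reproduced here); it assumes the simplest plausible case, a 180-degree rotation of each skeleton about the vertical axis through the hip-centre joint, so that a back-view skeleton is re-expressed in front-view coordinates. The function name `back_to_front` and the choice of the hip centre as pivot are illustrative assumptions.

```python
import numpy as np

def back_to_front(joints, hip_center):
    """Map back-view Kinect skeleton joints to an equivalent front view.

    Sketch only, not the authors' exact method: rotate each joint 180
    degrees about the vertical (y) axis passing through the hip centre,
    which mirrors the depth (z) and lateral (x) coordinates.

    joints: (N, 3) array of (x, y, z) skeleton joint coordinates.
    hip_center: (3,) coordinates of the hip-centre joint.
    """
    # 180-degree rotation about the y axis: (x, y, z) -> (-x, y, -z)
    R = np.array([[-1.0, 0.0,  0.0],
                  [ 0.0, 1.0,  0.0],
                  [ 0.0, 0.0, -1.0]])
    # Translate so the hip centre is the origin, rotate, translate back
    return (joints - hip_center) @ R.T + hip_center
```

After this mapping, front- and back-view sequences can be compared in a common coordinate frame, which is what allows the key-pose features from both camera placements to be matched against the same gallery.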


Keywords: Gait Recognition · Kinect · Geometric Transformation · Skeleton Joint · Key Pose



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Pratik Chattopadhyay (1)
  • Shamik Sural (1)
  • Jayanta Mukherjee (2)
  1. School of Information Technology, IIT Kharagpur, India
  2. Dept. of Computer Science & Engineering, IIT Kharagpur, India