A One-Shot DTW-Based Method for Early Gesture Recognition

  • Yared Sabinas
  • Eduardo F. Morales
  • Hugo Jair Escalante
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8259)

Abstract

Early gesture recognition consists of recognizing gestures at their beginning, using incomplete information. Among other applications, such methods can be used to compensate for the delay of gesture-based interactive systems. We propose a new approach for early recognition of full-body gestures based on dynamic time warping (DTW) that uses a single example from each category. Our method compares time sequences obtained from known and unknown gestures, and the classifier provides a response before the unknown gesture finishes. We performed experiments on the MSR-Action3D benchmark and on another data set we built. Results show that, on average, the classifier is able to recognize gestures with 60% of the information, losing only 7.29% accuracy with respect to using all of the information.
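To make the idea concrete, the following is a minimal Python sketch (not the authors' exact implementation) of one-shot, DTW-based early classification: each gesture class is represented by a single reference sequence of skeleton features, and an incomplete (prefix) observation of an unknown gesture is matched against every reference with an open-ended DTW, so a label can be produced before the gesture finishes. The feature representation, local distance, and open-end variant used here are illustrative assumptions.

    import numpy as np

    def dtw_distance(query, reference):
        """DTW cost between a (possibly partial) query and a full reference.

        Both inputs are arrays of shape (time, features). Taking the minimum
        over the last row lets the query align with only a prefix of the
        reference, which is what enables early (incomplete) recognition.
        """
        n, m = len(query), len(reference)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(query[i - 1] - reference[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],
                                     cost[i, j - 1],
                                     cost[i - 1, j - 1])
        # Open end: the partial query may stop anywhere inside the reference.
        return cost[n, 1:].min()

    def classify_partial(partial_gesture, references):
        """One-shot decision: return the label of the single exemplar whose
        DTW cost to the observed prefix is smallest."""
        return min(references,
                   key=lambda label: dtw_distance(partial_gesture, references[label]))

    if __name__ == "__main__":
        # Hypothetical data: one reference sequence per class (one-shot learning),
        # each frame described by a small skeleton-feature vector.
        rng = np.random.default_rng(0)
        references = {"wave": rng.normal(size=(40, 6)),
                      "punch": rng.normal(size=(35, 6))}
        unknown = references["wave"] + 0.05 * rng.normal(size=(40, 6))
        prefix = unknown[: int(0.6 * len(unknown))]   # only 60% of the gesture observed
        print(classify_partial(prefix, references))   # -> "wave"

In this sketch, the open-ended alignment is what allows a 60% prefix of a gesture to be compared fairly against full-length exemplars, mirroring the trade-off between earliness and accuracy reported above.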

Keywords

Early gesture recognition · DTW · One-shot learning · Kinect


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Yared Sabinas (1)
  • Eduardo F. Morales (1)
  • Hugo Jair Escalante (1)
  1. Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla, México
