
Learn to Move: Activity Specific Motion Models for Tracking by Detection

  • Thomas Mauthner
  • Peter M. Roth
  • Horst Bischof
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7585)

Abstract

In this paper, we focus on human activity detection, which solves detection, tracking, and recognition jointly. Existing approaches typically rely on off-the-shelf methods for detection and tracking, ignoring naturally available prior knowledge. Hence, in this work we present a novel strategy for learning activity-specific motion models from feature-to-temporal-displacement relationships. We propose a method based on an augmented version of canonical correlation analysis (AuCCA) for linking high-dimensional features to activity-specific spatial displacements over time. We compare this continuous and discriminative approach to other well-established methods for activity recognition and detection. In particular, we first improve activity detections by incorporating temporal forward and backward mappings to regularize the detections. Second, we extend a particle filter framework with activity-specific motion proposals, drastically reducing the search space. We demonstrate these improvements with detailed evaluations on several benchmark data sets, clearly showing the advantages of our activity-specific motion models.
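The paper's augmented formulation (AuCCA) is not reproduced here, but the core idea of linking feature vectors to spatial displacements can be illustrated with classic linear CCA (Hotelling, ref. 10): find projections of the feature space and the displacement space that are maximally correlated. The following sketch uses synthetic stand-in data; the dimensions, noise level, and regularization constant are illustrative assumptions, not the authors' setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (hypothetical): T frames of d-dimensional
# appearance features X and 2-D spatial displacements Y that depend
# (almost) linearly on the features.
T, d = 200, 32
X = rng.normal(size=(T, d))
W_true = rng.normal(size=(d, 2))
Y = X @ W_true + 0.1 * rng.normal(size=(T, 2))

def cca(X, Y, eps=1e-6):
    """Classic linear CCA via whitening plus SVD of the cross-covariance."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Regularized covariance and cross-covariance estimates
    Cxx = Xc.T @ Xc / (len(X) - 1) + eps * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / (len(Y) - 1) + eps * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / (len(X) - 1)
    # Whitening transforms: Wx.T @ Cxx @ Wx == I (and likewise for Y)
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    # Singular values of the whitened cross-covariance are the
    # canonical correlations; U, Vt give the paired directions.
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A = Wx @ U      # canonical directions in feature space
    B = Wy @ Vt.T   # canonical directions in displacement space
    return A, B, s

A, B, corrs = cca(X, Y)
print(corrs[:2])  # leading canonical correlations; close to 1 for this nearly linear data
```

In the paper's setting, the correlated subspaces found this way would be used to map an observed feature vector to an activity-specific displacement proposal, e.g. as the motion model inside a particle filter.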

Keywords

Random Forest · Motion Model · Action Recognition · Canonical Correlation Analysis · Human Action Recognition

References

  1. Wang, H., Ullah, M.M., Kläser, A., Laptev, I., Schmid, C.: Evaluation of local spatio-temporal features for action recognition. In: BMVC (2009)
  2. Kovashka, A., Grauman, K.: Learning a hierarchy of discriminative space-time neighborhood features for human action recognition. In: CVPR (2010)
  3. Gorelick, L., Blank, M., Shechtman, E., Irani, M., Basri, R.: Actions as space-time shapes. IEEE Trans. PAMI 29 (2007)
  4. Rodriguez, M.D., Ahmed, J., Shah, M.: Action MACH - a spatio-temporal maximum average correlation height filter for action recognition. In: CVPR (2008)
  5. Lin, Z., Jiang, Z., Davis, L.S.: Recognizing actions by shape-motion prototype trees. In: ICCV (2009)
  6. Burgos-Artizzu, X.P., Dollár, P., Lin, D., Anderson, D.J., Perona, P.: Social behavior recognition in continuous video. In: CVPR (2012)
  7. Yao, A., Gall, J., van Gool, L.: A Hough transform-based voting framework for action recognition. In: CVPR (2010)
  8. Khamis, S., Morariu, V.I., Davis, L.S.: A flow model for joint action recognition and identity maintenance. In: CVPR (2012)
  9. Gall, J., Lempitsky, V.: Class-specific Hough forests for object detection. In: CVPR (2009)
  10. Hotelling, H.: Relations between two sets of variates. Biometrika 28, 321–377 (1936)
  11. Melzer, T., Reiter, M., Bischof, H.: Appearance models based on kernel canonical correlation analysis. Pattern Recognition 36, 1961–1971 (2003)
  12. Breiman, L.: Random forests. Machine Learning 45, 5–32 (2001)
  13. Bosch, A., Zisserman, A., Munoz, X.: Image classification using random forests and ferns. In: ICCV (2007)
  14. Arulampalam, S., Maskell, S., Gordon, N., Clapp, T.: A tutorial on particle filters for on-line non-linear/non-Gaussian Bayesian tracking. IEEE Trans. Signal Processing 50, 174–188 (2002)

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Thomas Mauthner (1)
  • Peter M. Roth (1)
  • Horst Bischof (1)
  1. Institute for Computer Graphics and Vision, Graz University of Technology, Austria
