Activity Discovery Using Compressed Suffix Trees

  • Prithwijit Guha
  • Amitabha Mukerjee
  • K. S. Venkatesh
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6979)

Abstract

Unsupervised activity categorization in computer vision is much less explored than the common practice of supervised learning of activity patterns. Recent work on activity “discovery” has proposed the use of probabilistic suffix trees (PSTs) and their variants, which learn activity models from temporally ordered sequences of object states. Such sequences often contain many object-state self-transitions, resulting in a large number of PST nodes in the learned activity models. We propose an alternative method of mining these sequences that avoids learning the self-transitions while retaining the useful statistical properties of the sequences, thereby forming a “compressed suffix tree” (CST). We show that, on arbitrary sequences with significant self-transitions, the CST remains much smaller than the PST, whose size grows polynomially. We further propose a distance metric between CSTs, with which the learned activity models are categorized by hierarchical agglomerative clustering. CSTs learned from object trajectories extracted from two data sets are clustered for experimental verification of activity discovery.
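
As a rough illustration of the compression idea described above, the Python sketch below collapses consecutive repetitions of an object-state symbol (the self-transitions) into (state, run-length) pairs and then counts short suffix contexts over the compressed sequence. The function names, the run-length bookkeeping, and the simple substring-count statistics are illustrative assumptions only; they are not the authors' CST construction or their distance metric.

    # Minimal sketch (assumed preprocessing, not the paper's exact algorithm):
    # collapse self-transitions in an object-state sequence, keeping run lengths,
    # and gather simple suffix-context statistics over the compressed sequence.

    from collections import Counter
    from itertools import groupby

    def collapse_self_transitions(states):
        """Replace each run of identical states with a (state, run_length) pair."""
        return [(s, sum(1 for _ in run)) for s, run in groupby(states)]

    def suffix_context_counts(symbols, max_depth=3):
        """Count all contiguous contexts (substrings) of length 1..max_depth;
        a stand-in for the statistics a suffix-tree-style model would store."""
        counts = Counter()
        for i in range(len(symbols)):
            for d in range(1, max_depth + 1):
                if i + d <= len(symbols):
                    counts[tuple(symbols[i:i + d])] += 1
        return counts

    if __name__ == "__main__":
        trajectory = ["walk", "walk", "walk", "stop", "stop", "walk", "run", "run"]
        compressed = collapse_self_transitions(trajectory)
        print(compressed)  # [('walk', 3), ('stop', 2), ('walk', 1), ('run', 2)]
        print(suffix_context_counts([s for s, _ in compressed]))

In this toy example the eight-symbol trajectory reduces to four compressed symbols, while the stored run lengths preserve how long each state persisted.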

Keywords

Hidden Markov Model · Hierarchical Agglomerative Clustering · Symbol Sequence · Multiple Object Tracking · Object Trajectory

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Prithwijit Guha (1)
  • Amitabha Mukerjee (2)
  • K. S. Venkatesh (2)
  1. TCS Innovation Labs, New Delhi, India
  2. Indian Institute of Technology Kanpur, India
