Towards a Subject-Centered Analysis for Automated Video Surveillance

  • Michela Farenzena
  • Loris Bazzani
  • Vittorio Murino
  • Marco Cristani
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5716)

Abstract

In a typical video surveillance framework, a single camera or a set of cameras monitors a scene in which human activities are carried out. In this paper, we propose a complementary framework in which human activities are analyzed from a subjective point of view. The idea is to represent the focus of attention of a person as a 3D view frustum and to insert it into a 3D representation of the scene. This enables novel inferences and reasoning about the scene and the people acting in it. As a particular application of the proposed framework, we collect the information from the subjective view frusta in an Interest Map, i.e. a map that gathers, in an effective and intuitive way, which parts of the scene are observed most often in a given time interval. Experimental results on standard benchmark data demonstrate the effectiveness of the proposed framework, encouraging further efforts towards novel applications in the same direction.
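For intuition only, the sketch below shows one plausible way subjective view frusta could be accumulated into an Interest Map on a discretised ground plane. It is not the authors' implementation; the grid resolution, field of view, viewing distance, and the function accumulate_frustum are illustrative assumptions.

    # Minimal sketch (assumed, not the authors' code): accumulate per-person
    # planar view frusta into a 2D "Interest Map" over a ground-plane grid.
    import numpy as np

    def accumulate_frustum(interest_map, cell_size, position, gaze_dir,
                           fov=np.deg2rad(60), max_dist=5.0):
        """Add one person's (planar) view frustum to the interest map."""
        h, w = interest_map.shape
        ys, xs = np.mgrid[0:h, 0:w]
        # Cell centres in world coordinates (metres).
        cx = (xs + 0.5) * cell_size
        cy = (ys + 0.5) * cell_size
        dx, dy = cx - position[0], cy - position[1]
        dist = np.hypot(dx, dy)
        # Angular offset between the gaze direction and each cell centre.
        angle = np.arctan2(dy, dx)
        offset = np.abs((angle - gaze_dir + np.pi) % (2 * np.pi) - np.pi)
        # A cell is "observed" if it lies inside the frustum cone.
        visible = (dist <= max_dist) & (offset <= fov / 2)
        interest_map[visible] += 1.0
        return interest_map

    # Usage: two observations of one person looking roughly along the x-axis.
    imap = np.zeros((50, 80))  # 50 x 80 grid of 0.1 m cells
    imap = accumulate_frustum(imap, 0.1, position=(1.0, 2.5), gaze_dir=0.0)
    imap = accumulate_frustum(imap, 0.1, position=(1.2, 2.5), gaze_dir=0.1)

Cells with higher accumulated values correspond to parts of the scene observed more often over the time interval, which is the idea the Interest Map is meant to convey.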

Keywords

Video Surveillance · Observational Model · Surveillance Scenario · View Frustum · Camera Projection Matrix

Copyright information

© Springer-Verlag Berlin Heidelberg 2009

Authors and Affiliations

  • Michela Farenzena (1)
  • Loris Bazzani (1)
  • Vittorio Murino (2)
  • Marco Cristani (2)
  1. Dipartimento di Informatica, Università di Verona, Italy
  2. IIT Istituto Italiano di Tecnologia, Genova, Italy