Region-Oriented Visual Attention Framework for Activity Detection

  • Thomas Geerinck
  • Hichem Sahli
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4840)

Abstract

This paper proposes a framework, based on a spatio-temporal attention mechanism, for automatically determining regions of interest that correspond to events in video sequences of natural scenes in dynamic environments. We view this work as a preliminary step towards high-level semantic event analysis. More specifically, we aim to detect visual events within a cluttered scene without resorting to intensive training algorithms. In contrast to event detection methods in the literature, which drive attention from motion and spatial-location hypotheses, in our approach visual attention is region-driven as well as feature-driven. For this purpose, a two-stage attention mechanism is proposed. In the first stage, spatio-temporal activity analysis extracts key frames from the image sequence and selects salient areas within these frames, using three types of visual attention features: intensity, color and motion. In the second stage, the selected areas are further processed to determine the most active region, based on a newly defined region saliency measure. Qualitative and quantitative results obtained with the proposed framework are presented for the application domain of change detection in automated visual surveillance.
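
For illustration, the two-stage mechanism described above can be sketched in Python/NumPy as follows. This is a minimal sketch under our own assumptions, not the authors' implementation: the feature maps, the activity threshold, the feature weights and the region saliency score (mean saliency weighted by log-area) are placeholders standing in for the measures defined in the paper.

    # Minimal sketch (not the authors' implementation) of a two-stage,
    # region-oriented attention pipeline using intensity, color and
    # motion (frame-difference) feature maps.
    import numpy as np
    from scipy import ndimage

    def feature_maps(frame_rgb, prev_rgb):
        """Hypothetical per-frame feature maps: intensity, color, motion."""
        gray = frame_rgb.astype(float).mean(axis=2)
        prev_gray = prev_rgb.astype(float).mean(axis=2)
        intensity = np.abs(gray - gray.mean())            # deviation from mean brightness
        color = frame_rgb.astype(float).std(axis=2)       # crude color-contrast proxy
        motion = np.abs(gray - prev_gray)                  # frame-difference motion cue
        norm = lambda m: m / (m.max() + 1e-9)
        return norm(intensity), norm(color), norm(motion)

    def saliency_map(frame_rgb, prev_rgb, weights=(1.0, 1.0, 2.0)):
        """Weighted combination of the three feature maps into one saliency map."""
        maps = feature_maps(frame_rgb, prev_rgb)
        return sum(w * m for w, m in zip(weights, maps)) / sum(weights)

    def key_frames(video, activity_thresh=0.15):
        """Stage 1: flag frames whose mean saliency (activity) exceeds a threshold."""
        keys = []
        for t in range(1, len(video)):
            sal = saliency_map(video[t], video[t - 1])
            if sal.mean() > activity_thresh:               # simple activity measure
                keys.append((t, sal))
        return keys

    def most_active_region(sal, area_min=25):
        """Stage 2: score connected salient areas with a region saliency measure."""
        mask = sal > sal.mean() + sal.std()                # salient-area selection
        labels, n = ndimage.label(mask)
        best, best_score = None, -np.inf
        for lbl in range(1, n + 1):
            region = labels == lbl
            if region.sum() < area_min:
                continue
            score = sal[region].mean() * np.log1p(region.sum())  # illustrative measure
            if score > best_score:
                best, best_score = region, score
        return best, best_score

Given a sequence of RGB frames, key_frames would flag the active frames and most_active_region would return a mask of the most salient connected area in each; the thresholds and weights shown here are illustrative defaults only.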

Keywords

Event detection · Activity measure · Visual attention · Region-oriented

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Thomas Geerinck (1)
  • Hichem Sahli (1)
  1. Electronics & Informatics Department (VUB-ETRO), Vrije Universiteit Brussel (VUB), Interdisciplinary Institute for BroadBand Technology (IBBT), Brussels, Belgium
