
Foreground Regions Extraction and Characterization Towards Real-Time Object Tracking

  • José Luis Landabaso
  • Montse Pardàs
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3869)

Abstract

Object localization and tracking are key issues in the analysis of scenes for video surveillance and scene understanding applications. This paper presents a contribution to the object tracking task in indoor environments surveyed by multiple fixed cameras. The proposed method applies a foreground segmentation process to each camera view. A 3D foreground scene is then modeled and discretized into voxels using all the segmented views, which avoids the inter-object occlusion problems of 2D trackers and increases robustness, since the system does not rely on a single view. The voxels are grouped into meaningful blobs, whose colors are modeled for tracking purposes using a novel voxel-coloring technique that accounts for possible inter- and intra-object occlusions. Finally, color information, together with other characteristic features of the 3D object appearance, is tracked over time with a template-based technique that considers all the features simultaneously, weighted according to their respective variances. Extensive experiments on several hours of video from real-world scenarios have been conducted, showing very promising performance.
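
The abstract describes a shape-from-silhouette style reconstruction (voxels kept only where enough calibrated views see foreground) followed by variance-weighted template matching. As an illustration only, the following minimal Python/NumPy sketch shows one plausible form of these two steps; the function names (carve_foreground_voxels, feature_distance), the min_views threshold, and the diagonal-variance distance are assumptions for the example and are not taken from the paper.

    import numpy as np

    def carve_foreground_voxels(voxel_centers, projections, foreground_masks, min_views=3):
        """Mark a voxel as foreground when its centre projects onto a foreground
        pixel in at least `min_views` of the calibrated camera views.

        voxel_centers    : (N, 3) voxel centre coordinates in the world frame
        projections      : list of 3x4 camera projection matrices P = K [R | t]
        foreground_masks : list of binary (H, W) foreground masks, one per view
        """
        homog = np.hstack([voxel_centers, np.ones((len(voxel_centers), 1))])  # (N, 4)
        votes = np.zeros(len(voxel_centers), dtype=int)

        for P, mask in zip(projections, foreground_masks):
            uvw = homog @ P.T                      # project centres into the image plane
            uv = uvw[:, :2] / uvw[:, 2:3]          # perspective divide
            u = np.round(uv[:, 0]).astype(int)
            v = np.round(uv[:, 1]).astype(int)
            h, w = mask.shape
            inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
            hit = np.zeros(len(voxel_centers), dtype=bool)
            hit[inside] = mask[v[inside], u[inside]] > 0
            votes += hit                           # count views that saw foreground here

        return votes >= min_views

    def feature_distance(candidate, template_mean, template_var, eps=1e-6):
        """Variance-normalised distance between a candidate blob's feature vector
        (e.g. color and 3D appearance features) and a tracked object's template,
        so stable features weigh more than noisy ones."""
        return np.sum((candidate - template_mean) ** 2 / (template_var + eps))

In such a scheme, each tracked object would keep a running mean and variance per feature, and the candidate blob minimizing feature_distance would be assigned to the track; this is a generic sketch of variance-weighted matching, not the authors' exact procedure.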

Keywords

Video surveillance · Camera view · Foreground pixel · Foreground segmentation · Voxel coloring

Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • José Luis Landabaso, Technical University of Catalunya, Barcelona, Spain
  • Montse Pardàs, Technical University of Catalunya, Barcelona, Spain
