
A Framework for High-Level Feedback to Adaptive, Per-Pixel, Mixture-of-Gaussian Background Models

  • Michael Harville
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2352)

Abstract

Time-Adaptive, Per-Pixel Mixtures of Gaussians (TAPPMOGs) have recently become a popular choice for robust modeling and removal of complex and changing backgrounds at the pixel level. However, TAPPMOG-based methods cannot easily be made to model dynamic backgrounds with highly complex appearance, or to adapt promptly to sudden “uninteresting” scene changes such as the repositioning of a static object or the turning on of a light, without further undermining their ability to segment foreground objects, such as people, where they occlude the background for too long. To alleviate tradeoffs such as these, and, more broadly, to allow TAPPMOG segmentation results to be tailored to the specific needs of an application, we introduce a general framework for guiding pixel-level TAPPMOG evolution with feedback from “high-level” modules. Each such module can use pixel-wise maps of positive and negative feedback to attempt to impress upon the TAPPMOG some definition of foreground that is best expressed through “higher-level” primitives such as image region properties or semantics of objects and events. By pooling the foreground error corrections of many high-level modules into a shared, pixel-level TAPPMOG model in this way, we improve the quality of the foreground segmentation and the performance of all modules that make use of it. We show an example of using this framework with a TAPPMOG method and high-level modules that all rely on dense depth data from a stereo camera.
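Although the paper presents no code here, the mechanism is concrete enough to sketch. Below is a minimal Python/NumPy illustration of a Stauffer-Grimson-style TAPPMOG update with a hypothetical feedback hook: a per-pixel map in [-1, 1], pooled from high-level modules, that scales the model's learning rate so that positive feedback speeds absorption of “uninteresting” changes into the background while negative feedback resists absorbing persistent true foreground. The class name, parameter values, and the rate-modulation scheme are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

class TAPPMOG:
    """Per-pixel, time-adaptive mixture of K Gaussians over grayscale
    intensity (one scalar variance per component, for brevity)."""

    def __init__(self, shape, k=3, lr=0.01, init_var=225.0,
                 match_sigma=2.5, t_bg=0.7):
        h, w = shape
        self.k, self.lr, self.init_var = k, lr, init_var
        self.match_sigma, self.t_bg = match_sigma, t_bg
        self.mu = np.zeros((h, w, k), dtype=np.float32)    # component means
        self.var = np.full((h, w, k), init_var, np.float32)
        self.wgt = np.zeros((h, w, k), dtype=np.float32)   # mixing weights
        self.wgt[..., 0] = 1.0                             # one dominant component at start

    def update(self, frame, feedback=None):
        """One time step. `frame`: (H,W) float grayscale. `feedback`: optional
        (H,W) map in [-1,1] pooled from high-level modules. Modulating the
        learning rate with it is an assumed scheme, not the paper's exact rule:
        positive values absorb 'uninteresting' changes faster, negative
        values slow absorption where true foreground lingers."""
        x = frame[..., None].astype(np.float32)                  # (H,W,1)
        lr = self.lr if feedback is None else \
            self.lr * (1.0 + feedback)[..., None]                # per-pixel rate

        # Match test: is x within match_sigma std-devs of a component mean?
        d2 = (x - self.mu) ** 2
        match = d2 < (self.match_sigma ** 2) * self.var
        any_match = match.any(axis=-1)
        best = np.argmin(np.where(match, d2, np.inf), axis=-1)   # closest match
        hit = np.eye(self.k, dtype=bool)[best] & any_match[..., None]

        # Recursive weight update, then mean/variance of the matched component.
        self.wgt = (1.0 - lr) * self.wgt + lr * hit
        rho = 5.0 * lr * hit                    # crude stand-in for lr * p(x|k)
        self.mu += rho * (x - self.mu)
        self.var = np.maximum(self.var + rho * (d2 - self.var), 4.0)

        # Unmatched pixels: replace the weakest component with a new Gaussian.
        repl = np.eye(self.k, dtype=bool)[np.argmin(self.wgt, axis=-1)] \
            & ~any_match[..., None]
        self.mu = np.where(repl, x, self.mu)
        self.var = np.where(repl, self.init_var, self.var)
        self.wgt = np.where(repl, 0.05, self.wgt)
        self.wgt /= self.wgt.sum(axis=-1, keepdims=True)

        # Background components: the highest-weight components whose cumulative
        # weight stays under t_bg (weight-only ordering here; Stauffer-Grimson
        # order by weight/sigma).
        order = np.argsort(-self.wgt, axis=-1)
        w_sorted = np.take_along_axis(self.wgt, order, axis=-1)
        bg_sorted = w_sorted.cumsum(axis=-1) - w_sorted < self.t_bg
        is_bg = np.zeros_like(bg_sorted)
        np.put_along_axis(is_bg, order, bg_sorted, axis=-1)
        return ~(hit & is_bg).any(axis=-1)                       # True = foreground

# Usage sketch (frames are (H,W) float grayscale arrays):
# model = TAPPMOG(shape=(240, 320))
# fg = model.update(gray)                                    # plain segmentation
# fg = model.update(gray, feedback=np.clip(fb_a + fb_b, -1, 1))
```

In this sketch, the corrections of several high-level modules could be pooled by summing their per-pixel maps and clipping to [-1, 1] before each update, matching the abstract's description of a shared pixel-level model; how the paper actually combines and applies the feedback maps may differ.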

Keywords

Illumination Change · Foreground Object · Stereo Camera · Foreground Pixel · Observation History



Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Michael Harville, Hewlett-Packard Laboratories, Palo Alto, USA
