Background Modeling via Incremental Maximum Margin Criterion

  • Cristina Marghes
  • Thierry Bouwmans
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6469)


Subspace learning methods are widely used in background modeling to tackle illumination changes. Their main advantage is that they do not require labeled data during the training and running phases. Recently, White et al. [1] showed that a supervised approach can significantly improve robustness in background modeling. Following this idea, we propose to model the background via a supervised subspace learning method called Incremental Maximum Margin Criterion (IMMC). The proposed scheme robustly initializes the background model and incrementally updates its eigenvectors and eigenvalues. Experimental results on the Wallflower dataset show the pertinence of the proposed approach.
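The Maximum Margin Criterion underlying IMMC (Li et al. [20]) chooses projection directions that maximize tr(Wᵀ(S_b − S_w)W), the between-class scatter minus the within-class scatter, which reduces to taking the top eigenvectors of S_b − S_w. A minimal batch sketch in NumPy (the function name and data layout are illustrative, not taken from the paper, which uses the incremental variant):

```python
import numpy as np

def mmc_projection(X, y, k):
    """Batch Maximum Margin Criterion: top-k eigenvectors of Sb - Sw.

    X: (n_samples, d) feature matrix, y: (n_samples,) class labels.
    Returns W of shape (d, k) with orthonormal columns.
    """
    mean_all = X.mean(axis=0)
    d = X.shape[1]
    Sb = np.zeros((d, d))  # between-class scatter
    Sw = np.zeros((d, d))  # within-class scatter
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
        Sw += (Xc - mc).T @ (Xc - mc)
    # MMC maximizes tr(W^T (Sb - Sw) W): keep the k largest eigenvectors.
    vals, vecs = np.linalg.eigh(Sb - Sw)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order]
```

Unlike LDA, no inversion of S_w is needed, which avoids the small-sample-size singularity problem; IMMC further replaces the eigendecomposition with an incremental update so the model can adapt frame by frame.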


Keywords: Principal Component Analysis, Linear Discriminant Analysis, Independent Component Analysis, Background Modeling




  1. White, B., Shah, M.: Automatically tuning background subtraction parameters using particle swarm optimization. In: ICME 2007, pp. 1826–1829 (2007)
  2. Elhabian, S., El-Sayed, K., Ahmed, S.: Moving object detection in spatial domain using background removal techniques - state-of-art. In: RPCS, vol. 1, pp. 32–54 (January 2008)
  3. Bouwmans, T., Baf, F.E., Vachon, B.: Statistical background modeling for foreground detection: A survey. In: Handbook of Pattern Recognition and Computer Vision, vol. 4, pp. 181–189. World Scientific Publishing (2010)
  4. Bouwmans, T., Baf, F.E., Vachon, B.: Background modeling using mixture of gaussians for foreground detection: A survey. In: RPCS, vol. 1, pp. 219–237 (November 2008)
  5. Bouwmans, T.: Subspace learning for background modeling: A survey. In: RPCS, vol. 2, pp. 223–234 (November 2009)
  6. Oliver, N., Rosario, B., Pentland, A.: A bayesian computer vision system for modeling human interactions. In: ICVS 1999 (January 1999)
  7. Rymel, J., Renno, J., Greenhill, D., Orwell, J., Jones, G.: Adaptive eigen-backgrounds for object detection. In: ICIP 2004, pp. 1847–1850 (October 2004)
  8. Li, Y., Xu, L., Morphett, J., Jacobs, R.: An integrated algorithm of incremental and robust pca. In: ICIP 2003, pp. 245–248 (September 2003)
  9. Skocaj, D., Leonardis, A.: Weighted and robust incremental method for subspace learning. In: ICCV 2003, pp. 1494–1501 (2003)
  10. Zhang, J., Zhuang, Y.: Adaptive weight selection for incremental eigen-background modeling. In: ICME 2007, pp. 851–854 (July 2007)
  11. Wang, L., Wang, L., Zhuo, Q., Xiao, H., Wang, W.: Adaptive eigenbackground for dynamic background modeling. LNCS, vol. 2006, pp. 670–675 (2006)
  12. Zhang, J., Tian, Y., Yang, Y., Zhu, C.: Robust foreground segmentation using subspace based background model. In: APCIP 2009, vol. 2, pp. 214–217 (July 2009)
  13. Li, R., Chen, Y., Zhang, X.: Fast robust eigen-background updating for foreground detection. In: ICIP 2006, pp. 1833–1836 (2006)
  14. Yamazaki, M., Xu, G., Chen, Y.: Detection of moving objects by independent component analysis. In: Narayanan, P.J., Nayar, S.K., Shum, H.-Y. (eds.) ACCV 2006. LNCS, vol. 3852, pp. 467–478. Springer, Heidelberg (2006)
  15. Tsai, D., Lai, C.: Independent component analysis-based background subtraction for indoor surveillance. IEEE Transactions on Image Processing 18, 158–167 (2009)
  16. Chu, Y., Wu, X., Sun, W., Liu, T.: A basis-background subtraction method using non-negative matrix factorization. In: International Conference on Digital Image Processing, ICDIP 2010 (2010)
  17. Bucak, S., Gunsel, B.: Incremental subspace learning and generating sparse representations via non-negative matrix factorization. Pattern Recognition 42, 788–797 (2009)
  18. Li, X., Hu, W., Zhang, Z., Zhang, X.: Robust foreground segmentation based on two effective background models. In: MIR 2008, pp. 223–228 (October 2008)
  19. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: CVPR 1999, pp. 246–252 (1999)
  20. Li, H., Jiang, T., Zhang, K.: Efficient and robust feature extraction by maximum margin criterion. In: Advances in Neural Information Processing Systems, vol. 16 (2004)
  21. Wang, F., Zhang, C.: Feature extraction by maximizing the average neighborhood margin. In: CVPR 2007, pp. 1–8 (2007)
  22. Yan, J., Zhang, B., Yan, S., Yang, Q., Li, H., Chen, Z.: IMMC: incremental maximum margin criterion. In: KDD 2004, pp. 725–730 (August 2004)
  23. Toyama, K., Krumm, J., Brumitt, B., Meyers, B.: Wallflower: Principles and practice of background maintenance. In: ICCV 1999, pp. 255–261 (September 1999)
  24. Wren, C., Azarbayejani, A., Darrell, T., Pentland, A.: Pfinder: Real-time tracking of the human body. IEEE Transactions on PAMI 19, 780–785 (1997)
  25. Stauffer, C., Grimson, W.: Adaptive background mixture models for real-time tracking. In: CVPR 1999, pp. 246–252 (1999)
  26. Elgammal, A., Harwood, D., Davis, L.: Non-parametric model for background subtraction. In: ECCV 2000, pp. 751–767 (June 2000)
  27. Toyama, K., Krumm, J., Brumitt, B., Meyers, B.: Wallflower: Principles and practice of background maintenance. In: International Conference on Computer Vision, pp. 255–261 (September 1999)
  28. Chen, D., Zhang, L.: An incremental linear discriminant analysis using fixed point method. In: Wang, J., Yi, Z., Żurada, J.M., Lu, B.-L., Yin, H. (eds.) ISNN 2006. LNCS, vol. 3971, pp. 1334–1339. Springer, Heidelberg (2006)
  29. Kim, T., Wong, S., Stenger, B., Kittler, J., Cipolla, R.: Incremental linear discriminant analysis using sufficient spanning set approximations. In: CVPR 2007, pp. 1–8 (June 2007)
  30. Rosipal, R., Krämer, N.C.: Overview and recent advances in partial least squares. In: Saunders, C., Grobelnik, M., Gunn, S., Shawe-Taylor, J. (eds.) SLSFS 2005. LNCS, vol. 3940, pp. 34–51. Springer, Heidelberg (2006)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Cristina Marghes (1)
  • Thierry Bouwmans (1)
  1. Laboratoire MIA, University of La Rochelle, La Rochelle, France
