Learning from Mistakes: Object Movement Classification by the Boosted Features

  • Shigeyuki Odashima
  • Tomomasa Sato
  • Taketoshi Mori
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6468)


This paper proposes a robust object movement detection method based on a classifier trained on mis-detection samples. Mis-detections are tied to the environment, such as reflections on a display or small movements of a curtain, so learning their patterns improves detection precision. Mis-detections are expected to share several characteristic features, but manually selecting optimal features and thresholds is difficult. To acquire an optimal classifier automatically, we employ an ensemble learning framework. Experiments show that the method detects object movements with sufficient accuracy using a classifier constructed automatically by the proposed framework.
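The ensemble learning framework described in the abstract builds a classifier over multiple weak features automatically, rather than requiring hand-tuned thresholds. As a minimal sketch of that idea (not the paper's actual implementation), the following AdaBoost-style training over decision stumps shows how per-feature thresholds and their weights can be selected automatically from labeled samples; the feature vectors and labels here are hypothetical stand-ins for change-candidate features.

```python
import numpy as np

def train_adaboost_stumps(X, y, n_rounds=10):
    """Train an AdaBoost ensemble of one-feature decision stumps.

    X: (n_samples, n_features) feature vectors (hypothetical candidate
       features, e.g. color-histogram distances of detected changes).
    y: labels in {-1, +1}; +1 = true object movement, -1 = mis-detection.
    Returns a list of (feature_index, threshold, polarity, alpha).
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)          # sample weights, updated each round
    ensemble = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # Exhaustively pick the stump with the lowest weighted error.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = np.sum(w[pred != y])
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        err = max(best_err, 1e-12)    # avoid log(0) on perfect stumps
        if err >= 0.5:                # no weak learner better than chance
            break
        alpha = 0.5 * np.log((1 - err) / err)
        j, thr, pol = best
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        # Up-weight the samples this stump got wrong.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((j, thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted vote of all stumps."""
    score = np.zeros(len(X))
    for j, thr, pol, alpha in ensemble:
        score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(score)
```

The key property for the paper's setting is that each round re-weights the samples the current ensemble misclassifies, so recurring mis-detection patterns (the "mistakes") receive increasing attention during training.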


Keywords: Object Movement, Color Histogram, Stable Change, Object Candidate, Object Movement Detection





Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Shigeyuki Odashima (1)
  • Tomomasa Sato (1)
  • Taketoshi Mori (1)

  1. Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
