
Multistrategical Approach in Visual Learning

  • Hiroki Nomiya
  • Kuniaki Uehara
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4843)

Abstract

In this paper, we propose a novel visual learning framework for developing flexible and accurate object recognition methods. Most current visual-learning-based recognition methods adopt a monostrategy learning framework that uses a single feature. However, real-world objects are so complex that it is quite difficult for a monostrategy method to classify them correctly; a wide variety of features is needed to distinguish them precisely. To utilize various features, we propose multistrategical visual learning, which integrates multiple visual learners that are trained collaboratively. Specifically, a visual learner L intensively learns the examples misclassified by the other visual learners, while the other learners, in turn, learn the examples misclassified by L. As a result, a powerful object recognition method can be developed by integrating various visual learners, even if each has only mediocre recognition performance.
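The collaborative training scheme described above, where each learner re-weights its training distribution toward the examples the *other* learner misclassifies, can be illustrated with a minimal sketch. This is not the authors' algorithm: the decision-stump base learners, the weight-doubling factor, the toy 2-D data, and the AdaBoost-style final combination are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each "feature" stands in for one visual feature channel.
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

def train_stump(feature, labels, weights):
    """Pick the threshold/polarity on one feature minimising weighted error."""
    best = (0.0, 1.0, 1)  # (threshold, weighted error, polarity)
    for t in np.unique(feature):
        for polarity in (1, -1):
            pred = (polarity * (feature - t) > 0).astype(int)
            err = weights[pred != labels].sum()
            if err < best[1]:
                best = (t, err, polarity)
    return best

def predict_stump(feature, stump):
    t, _, polarity = stump
    return (polarity * (feature - t) > 0).astype(int)

n = len(y)
w_a = np.full(n, 1.0 / n)  # weights for learner A (feature 0)
w_b = np.full(n, 1.0 / n)  # weights for learner B (feature 1)

for _ in range(5):
    stump_a = train_stump(X[:, 0], y, w_a)
    stump_b = train_stump(X[:, 1], y, w_b)
    miss_a = predict_stump(X[:, 0], stump_a) != y
    miss_b = predict_stump(X[:, 1], stump_b) != y
    # Collaborative step: each learner concentrates on the OTHER's mistakes
    # (the doubling factor is an arbitrary illustrative choice).
    w_a = np.where(miss_b, w_a * 2.0, w_a); w_a /= w_a.sum()
    w_b = np.where(miss_a, w_b * 2.0, w_b); w_b /= w_b.sum()

# Integrate the two learners with AdaBoost-style confidence weights.
e_a, e_b = miss_a.mean(), miss_b.mean()
alpha_a = 0.5 * np.log((1 - e_a) / max(e_a, 1e-12))
alpha_b = 0.5 * np.log((1 - e_b) / max(e_b, 1e-12))
score = (alpha_a * (2 * predict_stump(X[:, 0], stump_a) - 1)
         + alpha_b * (2 * predict_stump(X[:, 1], stump_b) - 1))
pred = (score > 0).astype(int)
accuracy = (pred == y).mean()
```

Each stump sees only one feature and is individually mediocre, which mirrors the paper's point: the integration of complementarily trained learners is what yields a strong recognizer.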

Keywords

Recognition Performance, Recognition Accuracy, Frequent Pattern, Scale Invariant Feature Transform, Base Learner



Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Hiroki Nomiya¹
  • Kuniaki Uehara¹
  1. Graduate School of Science and Technology, Kobe University
