A Curious Vision System for Autonomous and Cumulative Object Learning

  • Pramod Chandrashekhariah
  • Gabriele Spina
  • Jochen Triesch
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 458)

Abstract

We introduce a fully autonomous active vision system that explores its environment and learns visual representations of the objects in the scene. The design is motivated by the observation that infants learn internal representations of the world with little human assistance. Inspired by this, we build a curiosity-driven system that is drawn towards locations in the scene offering the highest potential for learning: the attention devoted to a stimulus is tied to the improvement of the system's internal model of it. As a result, the system learns changes in object appearance in a cumulative fashion. We also introduce a self-correction mechanism that rectifies situations in which several distinct models have been learned for the same object, or a single model has been learned for adjacent objects. We demonstrate through experiments that curiosity-driven learning leads to a higher learning speed and improved accuracy.
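The paper itself contains no code; the following is a minimal Python sketch of the curiosity mechanism the abstract describes, under the assumption that "potential for learning" is estimated as the recent drop in a region's model error (a learning-progress signal in the spirit of intrinsically motivated learning). All names here (CuriosityDrivenAttention, record_error, next_fixation) are hypothetical illustrations, not the authors' implementation.

```python
import random


class CuriosityDrivenAttention:
    """Selects the scene region whose object model is currently
    improving fastest, i.e. the region with the highest learning
    progress. Hypothetical sketch, not the authors' code."""

    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon   # small chance of random exploration
        self.error_history = {}  # region id -> list of recent model errors

    def record_error(self, region_id, prediction_error):
        """Store the latest model error observed for a region."""
        self.error_history.setdefault(region_id, []).append(prediction_error)

    def learning_progress(self, region_id, window=5):
        """Estimate progress as the drop in mean error between the
        older and the newer half of a sliding window."""
        errors = self.error_history.get(region_id, [])
        if len(errors) < 2 * window:
            # Barely explored regions look maximally promising.
            return float("inf")
        older = sum(errors[-2 * window:-window]) / window
        newer = sum(errors[-window:]) / window
        return older - newer  # positive while the model keeps improving

    def next_fixation(self, region_ids):
        """Fixate the region with the highest learning progress,
        occasionally picking a random one to keep exploring."""
        if random.random() < self.epsilon:
            return random.choice(region_ids)
        return max(region_ids, key=self.learning_progress)
```

Under this reading, a region's saliency decays naturally once its model stops improving, which is what lets the system move on to novel objects and, over time, accumulate models for the whole scene.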

Keywords

Active vision · Unsupervised learning · Autonomous vision system · Vision for robotics · Humanoid robot · iCub · Object recognition · Visual attention · Stereo vision · Intrinsic motivation

Acknowledgements

This work was supported by the BMBF project “Bernstein Fokus: Neurotechnologie Frankfurt” (FKZ 01GQ0840) and by the EU project “IM-CLeVeR - Intrinsically Motivated Cumulative Learning Versatile Robots” (FP7-ICT-IP-231722). We thank Richard Veale (Indiana University) for providing the saliency code.

Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  • Pramod Chandrashekhariah 1
  • Gabriele Spina 1
  • Jochen Triesch 1

  1. Frankfurt Institute for Advanced Studies (FIAS), Johann Wolfgang Goethe University, Frankfurt am Main, Germany