Artificial Curiosity Driven Robots with Spatiotemporal Regularity Discovery Ability

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8589)

Abstract

Autonomous reinforcement learning (RL) robots usually need to learn from raw, high-dimensional data that are generated by visual sensors and often corrupted by noise. Such tasks are challenging and cannot be addressed without an efficient mechanism to encode and simplify the raw data. A recent study proposed an artificial curious robot (ACR) for this problem. However, that model cannot handle non-Markovian tasks or discover spatiotemporal patterns in its environment. This paper presents a method that solves this problem by extending ACR. A straightforward but inefficient solution would be to keep a record of all previous observations, which makes the algorithm intractable. Instead, we construct a perceptual context in a compact way. Using different environments, we show that the proposed algorithm can discover regularities in its environment without any prior information about the task.
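
To make the idea of a compact perceptual context concrete: one common construction (a minimal sketch under our own assumptions, not necessarily the encoding used in the paper) is an exponentially decaying trace over encoded observations, which gives the agent a fixed-size summary of its history instead of an ever-growing record:

```python
import numpy as np

class DecayingContext:
    """Hypothetical sketch: an exponentially decaying trace of encoded
    observations, so the agent's state is (current encoding, context)
    rather than the full observation history."""

    def __init__(self, dim: int, decay: float = 0.9):
        self.decay = decay            # how quickly old observations fade
        self.context = np.zeros(dim)  # fixed-size summary of the past

    def update(self, encoded_obs: np.ndarray) -> np.ndarray:
        # Blend the new encoding into the running trace; memory stays
        # O(dim) regardless of episode length, unlike storing all history.
        self.context = self.decay * self.context + (1.0 - self.decay) * encoded_obs
        return self.context

# Usage: feed (observation encoding, context) to the RL policy each step.
ctx = DecayingContext(dim=8)
rng = np.random.default_rng(0)
for _ in range(5):
    obs_code = rng.normal(size=8)  # stand-in for the robot's encoded sensor input
    state = np.concatenate([obs_code, ctx.update(obs_code)])
```

A trace like this lets a learner disambiguate states that look identical in the current frame but differ in recent history, which is exactly the failure mode of Markovian agents on non-Markovian tasks.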




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Kalhor, D., Loo, C.K. (2014). Artificial Curiosity Driven Robots with Spatiotemporal Regularity Discovery Ability. In: Huang, D.-S., Jo, K.-H., Wang, L. (eds.) Intelligent Computing Methodologies. ICIC 2014. Lecture Notes in Computer Science, vol. 8589. Springer, Cham. https://doi.org/10.1007/978-3-319-09339-0_9

  • DOI: https://doi.org/10.1007/978-3-319-09339-0_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-09338-3

  • Online ISBN: 978-3-319-09339-0

  • eBook Packages: Computer Science, Computer Science (R0)
