Multi-timescale Nexting in a Reinforcement Learning Robot

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7426)

Abstract

The term “nexting” has been used by psychologists to refer to the propensity of people and many other animals to continually predict what will happen next in an immediate, local, and personal sense. The ability to “next” constitutes a basic kind of awareness and knowledge of one’s environment. In this paper we present results with a robot that learns to next in real time, predicting thousands of features of the world’s state, including all sensory inputs, at timescales from 0.1 to 8 seconds. This was achieved by treating each state feature as a reward-like target and applying temporal-difference methods to learn a corresponding value function with a discount rate corresponding to the timescale. We show that two thousand predictions, each dependent on six thousand state features, can be learned and updated online at better than 10Hz on a laptop computer, using the standard TD(λ) algorithm with linear function approximation. We show that this approach is efficient enough to be practical, with most of the learning complete within 30 minutes. We also show that a single tile-coded feature representation suffices to accurately predict many different signals at a significant range of timescales. Finally, we show that the accuracy of our learned predictions compares favorably with the optimal off-line solution.
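The core idea in the abstract — treating each sensory signal as a pseudo-reward and learning one value function per timescale with TD(λ) and linear function approximation — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the `TDLambdaNexter` class and its interface are hypothetical, the feature vector stands in for the paper's tile-coded representation, and the mapping γ = 1 − 1/k for a k-step timescale is an assumption consistent with the abstract's description of discount rates corresponding to timescales.

```python
import numpy as np

class TDLambdaNexter:
    """Bank of TD(lambda) predictors, one per timescale, sharing one
    feature vector. Illustrative sketch only; the paper uses a tile-coded
    binary feature representation in place of the generic phi used here."""

    def __init__(self, n_features, timescales_steps, alpha=0.1, lam=0.9):
        # Discount per timescale: a k-step horizon maps to gamma = 1 - 1/k
        self.gammas = np.array([1.0 - 1.0 / k for k in timescales_steps])
        self.w = np.zeros((len(self.gammas), n_features))  # weights per timescale
        self.e = np.zeros_like(self.w)                     # eligibility traces
        self.alpha = alpha                                 # step size
        self.lam = lam                                     # trace-decay parameter

    def predict(self, phi):
        # Linear value estimates: one prediction per timescale
        return self.w @ phi

    def update(self, phi, signal, phi_next):
        # The signal itself is the reward-like target for every predictor
        delta = signal + self.gammas * (self.w @ phi_next) - self.w @ phi
        # Accumulating traces, decayed by gamma * lambda per timescale
        self.e = (self.gammas * self.lam)[:, None] * self.e + phi[None, :]
        # Standard TD(lambda) weight update
        self.w += self.alpha * delta[:, None] * self.e
```

With a constant signal of 1 and a fixed state, each prediction converges to 1/(1 − γ), i.e. the signal totalled over its timescale — the same quantity a nexting prediction at that timescale represents.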





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Modayil, J., White, A., Sutton, R.S. (2012). Multi-timescale Nexting in a Reinforcement Learning Robot. In: Ziemke, T., Balkenius, C., Hallam, J. (eds) From Animals to Animats 12. SAB 2012. Lecture Notes in Computer Science (LNAI), vol. 7426. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33093-3_30

  • DOI: https://doi.org/10.1007/978-3-642-33093-3_30

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33092-6

  • Online ISBN: 978-3-642-33093-3

  • eBook Packages: Computer Science (R0)
