Part of the book series: Intelligent Systems Reference Library (ISRL, volume 28)

Abstract

Interactions between an organism and its environment are commonly treated in the framework of Markov Decision Processes (MDPs). While the standard MDP framework aims solely at maximizing expected future rewards (value), the circular flow of information between the agent and its environment is generally ignored. In particular, the information gained from the environment by means of perception and the information involved in the process of action selection (i.e., control) are not treated in the standard MDP setting. In this paper, we focus on the control information and show how it can be combined with the reward measure in a unified way. Both of these measures satisfy the familiar Bellman recursive equations, and their linear combination (the free-energy) provides an interesting new optimization criterion. The tradeoff between value and information, explored using our info-rl algorithm, provides a principled justification for stochastic (soft) policies. We use computational learning theory to show that these optimal policies are also robust to uncertainties in settings with only partial knowledge of the MDP parameters.
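To make the criterion concrete, the following is a minimal sketch of the kind of free-energy Bellman recursion the abstract describes. The notation is our assumption for illustration (reward r(s,a), transition kernel p(s'|s,a), prior action distribution ρ(a|s), tradeoff parameter β); it is not a transcription of the chapter's equations:

F^{\pi}(s) \;=\; \sum_{a} \pi(a \mid s)\Big[\log\frac{\pi(a \mid s)}{\rho(a \mid s)} \;-\; \beta\, r(s,a) \;+\; \sum_{s'} p(s' \mid s,a)\, F^{\pi}(s')\Big]

The KL term charges the policy for the control information it uses (its deviation from the prior ρ), while the β-weighted reward term credits value. Minimizing this free energy over π yields a stochastic (soft) policy of Boltzmann form,

\pi^{*}(a \mid s) \;\propto\; \rho(a \mid s)\, \exp\!\Big(\beta\, r(s,a) \;-\; \sum_{s'} p(s' \mid s,a)\, F^{*}(s')\Big),

with β → ∞ recovering the standard value-maximizing MDP and β → 0 collapsing the policy onto the prior.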

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Rubin, J., Shamir, O., Tishby, N. (2012). Trading Value and Information in MDPs. In: Guy, T.V., Kárný, M., Wolpert, D.H. (eds) Decision Making with Imperfect Decision Makers. Intelligent Systems Reference Library, vol 28. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24647-0_3

  • DOI: https://doi.org/10.1007/978-3-642-24647-0_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24646-3

  • Online ISBN: 978-3-642-24647-0

  • eBook Packages: Engineering, Engineering (R0)
