Abstract
Interactions between an organism and its environment are commonly treated within the framework of Markov Decision Processes (MDPs). While the standard MDP formulation aims solely at maximizing expected future rewards (value), the circular flow of information between the agent and its environment is generally ignored. In particular, the information gained from the environment through perception and the information involved in the process of action selection (i.e., control) are not treated in the standard MDP setting. In this paper we focus on the control information and show how it can be combined with the reward measure in a unified way. Both of these measures satisfy familiar Bellman-type recursive equations, and their linear combination (the free energy) provides an interesting new optimization criterion. The trade-off between value and information, explored here using our INFO-RL algorithm, provides a principled justification for stochastic (soft) policies. Using computational learning theory, we show that these optimal policies are also robust to uncertainty in settings where the MDP parameters are only partially known.
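The chapter itself derives the exact recursions; as a rough illustration only, the following Python sketch shows a free-energy ("soft") value iteration of the kind the abstract describes. Everything beyond the abstract is a stated assumption: control information is modeled as the Kullback–Leibler divergence between the policy π(a|s) and a fixed prior ρ(a|s), a discount factor gamma is added so the backup converges, and the names (info_rl_value_iteration, rho, beta) are illustrative, not taken from the chapter.

```python
import numpy as np

def info_rl_value_iteration(P, r, rho, beta, gamma=0.95, n_iters=500, tol=1e-10):
    """Illustrative free-energy ("soft") value iteration (sketch, not the chapter's code).

    P     : (S, A, S) array, P[s, a, s'] = transition probability
    r     : (S, A) array of immediate rewards
    rho   : (S, A) array, prior action distribution rho(a|s), strictly positive
    beta  : positive trade-off parameter between value and control information
    gamma : discount factor (an added assumption, so the backup converges)
    """
    S, A = r.shape
    F = np.zeros(S)                                   # free energy per state
    for _ in range(n_iters):
        Q = r + gamma * (P @ F)                       # Q(s,a) backup, shape (S, A)
        logits = np.log(rho) + beta * Q
        m = logits.max(axis=1, keepdims=True)         # stabilized log-sum-exp
        F_new = (m[:, 0] + np.log(np.exp(logits - m).sum(axis=1))) / beta
        if np.max(np.abs(F_new - F)) < tol:
            F = F_new
            break
        F = F_new
    # Soft (Boltzmann) policy: pi(a|s) proportional to rho(a|s) * exp(beta * Q(s,a))
    Q = r + gamma * (P @ F)
    logits = np.log(rho) + beta * Q
    logits -= logits.max(axis=1, keepdims=True)       # avoid overflow in exp
    pi = np.exp(logits)
    pi /= pi.sum(axis=1, keepdims=True)
    return F, pi

# Usage on a small random MDP (hypothetical data, for illustration only):
rng = np.random.default_rng(0)
S, A = 5, 3
P = rng.dirichlet(np.ones(S), size=(S, A))            # random transition kernel
r = rng.standard_normal((S, A))                       # random rewards
rho = np.full((S, A), 1.0 / A)                        # uniform prior policy
F, pi = info_rl_value_iteration(P, r, rho, beta=2.0)
```

Sweeping beta traces the value–information trade-off the abstract refers to: a small beta keeps the policy close to the prior (little control information, lower value), while a large beta approaches the deterministic greedy policy (maximal value, expensive control).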
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Rubin, J., Shamir, O., Tishby, N. (2012). Trading Value and Information in MDPs. In: Guy, T.V., Kárný, M., Wolpert, D.H. (eds) Decision Making with Imperfect Decision Makers. Intelligent Systems Reference Library, vol 28. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24647-0_3
DOI: https://doi.org/10.1007/978-3-642-24647-0_3
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24646-3
Online ISBN: 978-3-642-24647-0
eBook Packages: Engineering, Engineering (R0)