Temporal Difference Coding in Reinforcement Learning
In this paper, we regard the sequence of returns as the output of a parametric compound source. The coding rate of the source measures the amount of information carried by the return, so the information gain concerning future information is given by the sum of the discounted coding rates. We accordingly formulate a temporal difference learning method for estimating the expected information gain, and prove its convergence under certain conditions. As an application, we propose using the ratio w of return loss to information gain in probabilistic action selection strategies. Experiments show that our w-based strategy compares favorably with the conventional Q-based strategy.
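The abstract's definition admits a direct recursive form. Writing c_t for the coding rate of the return observed at time t and γ for the discount factor (the notation here is our paraphrase, not necessarily the paper's), the information gain and the TD(0) update it suggests are:

```latex
% Information gain as the discounted sum of coding rates (notation assumed):
I_t = \sum_{k=0}^{\infty} \gamma^{k} c_{t+k} = c_t + \gamma I_{t+1}
% which yields the standard TD(0) update for the estimate \hat{I}:
\hat{I}(s_t) \leftarrow \hat{I}(s_t)
  + \alpha \bigl( c_t + \gamma \hat{I}(s_{t+1}) - \hat{I}(s_t) \bigr)
```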
Keywords: Reinforcement Learning · Information Gain · Markov Decision Process · Return Loss · Entropy Rate
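A minimal sketch of how these quantities could be estimated and used for action selection follows. The running Gaussian source model in coding_rate, the Q-learning-style max target, the softmax over negative w, and all names and constants (ALPHA, GAMMA, TAU, td_update, select_action) are our illustrative assumptions; the paper's parametric compound source and exact selection rule may differ.

```python
import math
import random
from collections import defaultdict

ALPHA = 0.1   # TD step size (assumed value)
GAMMA = 0.9   # discount factor (assumed value)
TAU   = 1.0   # softmax temperature for the w-based rule (assumed value)

Q = defaultdict(float)          # expected return per (state, action)
I = defaultdict(lambda: 1e-6)   # expected information gain per (state, action)
stats = defaultdict(lambda: [0, 0.0, 1.0])  # per-(s,a) [count, mean, var] of returns

def coding_rate(x, key):
    """Codelength (nats) of one return sample under a running Gaussian
    source model -- an assumed stand-in for the paper's parametric
    compound source."""
    n, mean, var = stats[key]
    v = max(var, 1e-6)
    rate = 0.5 * math.log(2 * math.pi * v) + (x - mean) ** 2 / (2 * v)
    # update the running mean/variance with the new sample
    n += 1
    delta = x - mean
    mean += delta / n
    var += (delta * (x - mean) - var) / n
    stats[key] = [n, mean, var]
    return max(rate, 0.0)  # clamp: differential codelength can go negative

def td_update(s, a, r, s2, actions):
    """One TD(0) step for the return estimate Q and, in parallel, for the
    information-gain estimate I (discounted sum of coding rates)."""
    best = max(actions, key=lambda b: Q[(s2, b)])
    target_q = r + GAMMA * Q[(s2, best)]
    c = coding_rate(target_q, (s, a))   # coding rate of the observed return
    Q[(s, a)] += ALPHA * (target_q - Q[(s, a)])
    I[(s, a)] += ALPHA * (c + GAMMA * I[(s2, best)] - I[(s, a)])

def select_action(s, actions):
    """w-based probabilistic selection: w = return loss / information gain.
    Actions losing little return per unit of information are preferred."""
    q_max = max(Q[(s, a)] for a in actions)
    prefs = []
    for a in actions:
        loss = q_max - Q[(s, a)]          # return loss of choosing a
        gain = max(I[(s, a)], 1e-6)       # information gain of choosing a
        prefs.append(math.exp(-(loss / gain) / TAU))
    total = sum(prefs)
    return random.choices(actions, weights=[p / total for p in prefs])[0]
```

Intuitively, an action with small w sacrifices little return per unit of information gained, so this rule explores where information is cheap; the softmax-over-negative-w form is one plausible reading of "probabilistic action selection," not necessarily the strategy evaluated in the paper's experiments.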