An agent is a system that interacts continually with an environment, without human assistance, in order to carry out a predefined task. We are interested in developing artificial agents that act rationally, in the sense that they maximize a suitable utility function. In this chapter, we describe the main problems underlying the realization of rational agents and present commonly adopted mathematical models. In particular, we consider the case in which the environment can be modeled as a finite-state stochastic process and address the problem of developing agents that learn to act rationally from their own experience.
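The setting described above, an agent learning to act in a finite-state stochastic environment from its own experience, can be sketched with minimal tabular Q-learning. The toy chain environment, state count, and learning parameters below are illustrative assumptions, not taken from the chapter:

```python
import random

# Hypothetical toy environment: a 5-state chain where action 1 moves right
# (reward 1 on reaching the last state) and action 0 stays in place.
N_STATES, ACTIONS = 5, (0, 1)

def step(state, action):
    """Return (next_state, reward, done) for the toy chain."""
    if action == 1:
        nxt = state + 1
        if nxt == N_STATES - 1:
            return nxt, 1.0, True   # goal reached, episode ends
        return nxt, 0.0, False
    return state, 0.0, False        # action 0: stay, no reward

# Tabular Q-learning update (Watkins, 1989):
#   Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = [[0.0, 0.0] for _ in range(N_STATES)]

random.seed(0)
for episode in range(500):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection: mostly exploit, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[s][x])
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned greedy policy should move right in every non-terminal state.
policy = [max(ACTIONS, key=lambda x: Q[s][x]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy derived from the learned Q-table moves right in every non-terminal state, illustrating how an agent can discover rational behavior purely from interaction with a finite-state stochastic environment.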