Combining Attention and Value Maps

  • Stathis Kasderidis
  • John G. Taylor
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3696)

Abstract

We present an approach that combines attention with value maps in order to acquire a decision-making policy for multiple concurrent goals. The former component is essential for coping with an uncertain and open environment, while the latter offers a general model for building decision-making systems from reward information. We discuss the multiple-goal policy-acquisition problem and justify our approach. We provide simulation results that support our solution.
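To make the idea concrete, the sketch below shows one simple way per-goal value maps can be combined through attention weights to choose actions. It is only an illustrative toy, not the authors' model: the tabular state/action sizes, the soft-max attention over goal salience, and all names are assumptions introduced here.

```python
import numpy as np

# Illustrative sketch (assumed, not the paper's method): per-goal tabular
# "value maps" over a small discrete state space, combined through
# attention weights to select an action serving multiple concurrent goals.

rng = np.random.default_rng(0)
n_states, n_actions, n_goals = 5, 3, 2

# One action-value map per goal, e.g. each learned separately from reward.
value_maps = rng.random((n_goals, n_states, n_actions))

# Goal salience scores (assumed input, e.g. task urgency); attention is a
# soft-max over them, so more salient goals dominate the combined values.
salience = np.array([1.5, 0.5])
attention = np.exp(salience) / np.exp(salience).sum()

def select_action(state: int) -> int:
    """Greedy action w.r.t. the attention-weighted mixture of value maps."""
    combined = np.tensordot(attention, value_maps[:, state, :], axes=1)
    return int(np.argmax(combined))

if __name__ == "__main__":
    for s in range(n_states):
        print(f"state {s}: action {select_action(s)}")
```

In this toy version, shifting the salience vector toward one goal makes the policy track that goal's value map more closely, which is the behaviour the combination of attention and value maps is meant to provide.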

Copyright information

© Springer-Verlag Berlin Heidelberg 2005

Authors and Affiliations

  • Stathis Kasderidis¹
  • John G. Taylor²
  1. Foundation for Research and Technology – Hellas, Institute of Computer Science, Heraklion, Greece
  2. Dept. of Mathematics, King’s College London, London, UK