Part of the book series: Studies in Computational Intelligence ((SCI,volume 139))

Having so far concentrated on how LCS handle regression and classification tasks, this chapter returns to the prime motivator for LCS: sequential decision tasks. Little theoretical LCS work has concentrated on these tasks (for example, [30, 224]), despite some obvious problems that need to be solved [11, 12, 77]. Meanwhile, other machine learning methods have steadily improved their performance on such tasks [126, 28, 204], based on extensive theoretical advances. To catch up with these methods, LCS must refine their theory in order to offer competitive performance. This chapter provides a strong basis for further theoretical development within the MDP framework and discusses some currently relevant issues.

Sequential decision tasks are, in general, characterised by having a set of states and actions, where an action performed in a particular state causes a transition to the same or another state. Each transition is mediated by a scalar reward, and the aim is to perform actions in particular states such that the sum of rewards received is maximised in the long run. How to choose an action for a given state is determined by the policy. Even though the space of possible policies could be searched directly, a more common and more efficient approach is to learn for each state the sum of future rewards that one can expect to receive from that state, and derive the optimal policy from that knowledge.
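The value-based approach described above can be sketched with value iteration on a tiny example. The MDP below (its states, transition table, rewards, and discount factor) is an illustrative assumption, not taken from the chapter; it only shows how state values are learned and how the optimal policy is then read off from them.

```python
import numpy as np

# Hypothetical MDP: 3 states, 2 actions, deterministic transitions.
# P[s, a] = successor state reached by taking action a in state s.
P = np.array([[1, 2],
              [0, 2],
              [2, 2]])          # state 2 is absorbing
# R[s, a] = scalar reward mediating the transition from state s under action a.
R = np.array([[0.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
gamma = 0.9                     # discount factor for the long-run reward sum

V = np.zeros(3)                 # estimated sum of future rewards per state
for _ in range(100):            # repeat the Bellman optimality update
    Q = R + gamma * V[P]        # Q[s, a] = immediate reward + discounted successor value
    V_new = Q.max(axis=1)       # best achievable value from each state
    if np.abs(V_new - V).max() < 1e-8:
        break                   # values have converged
    V = V_new

# The optimal policy is derived from the learned values: in each state,
# pick the action with the highest Q-value.
policy = Q.argmax(axis=1)
```

Here `V` converges to the expected long-run reward from each state (0.9, 1.0, and 0.0 respectively), and the greedy policy sends state 0 to state 1 and state 1 into the rewarding transition, exactly the "learn values, then derive the policy" scheme the paragraph describes.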



Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

Cite this chapter

Drugowitsch, J. (2008). Towards Reinforcement Learning with LCS. In: Design and Analysis of Learning Classifier Systems. Studies in Computational Intelligence, vol 139. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-79866-8_9

  • DOI: https://doi.org/10.1007/978-3-540-79866-8_9

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-79865-1

  • Online ISBN: 978-3-540-79866-8

  • eBook Packages: Engineering (R0)
