Abstract
The methods presented in Chapter 3 allow the representation-learning capacity of evolutionary algorithms like NEAT to be harnessed in both off-line and on-line scenarios. However, that capacity is still limited in scope to policy search methods. Hence, Sutton and Barto's criticism (that policy search methods, unlike temporal difference methods, do not exploit the specific structure of the reinforcement learning problem) still applies. To address this problem, we need methods that can optimize representations, not just for policies, but also for value function approximators trained with temporal difference methods.
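The combination the abstract calls for, an evolutionary outer loop that proposes representations while temporal difference learning trains each candidate's value function during its fitness evaluation, can be illustrated with a deliberately simplified sketch. The toy chain MDP, the bit-mask "representations" (which stand in for evolved NEAT topologies), and all function names below are hypothetical illustrations, not the book's actual algorithm:

```python
import random

random.seed(0)

N_STATES, N_ACTIONS = 5, 2  # toy chain MDP: actions move left/right, reward at right end

def step(s, a):
    """One environment transition in the toy chain MDP."""
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

def td_fitness(mask, episodes=30, alpha=0.5, gamma=0.9, eps=0.1):
    """Fitness of a candidate representation: total reward earned while a
    Q-function restricted to that representation is trained with
    temporal-difference (Q-learning) updates. States whose mask bit is 0
    are aliased into one shared table entry."""
    def feat(s):
        return s if mask[s] else N_STATES  # shared "aliased" bucket
    Q = [[0.0] * N_ACTIONS for _ in range(N_STATES + 1)]
    total = 0.0
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            f = feat(s)
            if random.random() < eps:
                a = random.randrange(N_ACTIONS)
            else:
                a = max(range(N_ACTIONS), key=lambda act: Q[f][act])
            s2, r = step(s, a)
            # TD (Q-learning) update on the candidate's representation
            Q[f][a] += alpha * (r + gamma * max(Q[feat(s2)]) - Q[f][a])
            total += r
            s = s2
            if r > 0:
                break  # reached the rewarding terminal state
    return total

# Evolutionary outer loop over representations: select, then mutate bit masks.
pop = [[random.randint(0, 1) for _ in range(N_STATES)] for _ in range(8)]
for gen in range(10):
    parents = sorted(pop, key=td_fitness, reverse=True)[:4]
    pop = parents + [[b ^ (random.random() < 0.2) for b in p] for p in parents]
best = max(pop, key=td_fitness)
print(best, td_fitness(best))
```

The key structural point, which carries over to the full method, is that fitness is not measured on a fixed policy: each candidate representation is judged by how well temporal difference learning performs *within* it, so evolution optimizes the representation while TD methods exploit the structure of the reinforcement learning problem.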
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
About this chapter
Cite this chapter
Whiteson, S. (2010). Evolutionary Function Approximation. In: Adaptive Representations for Reinforcement Learning. Studies in Computational Intelligence, vol 291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-13932-1_4
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-13931-4
Online ISBN: 978-3-642-13932-1
eBook Packages: Engineering (R0)