Deep Reinforcement Learning
A3C in Code
Coding the Asynchronous Advantage Actor-Critic Agent
Abstract
In this chapter, we will cover the Asynchronous Advantage Actor-Critic (A3C) model. We use TensorFlow's own implementation of Keras for this, defining the actor-critic model with Keras's model-subclassing API and eager execution. Both the master and the worker agents use this model. The asynchronous workers are implemented as separate threads, syncing with the master after every few steps or upon completion of their respective episodes.
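The chapter's full listing is not reproduced here; below is a minimal sketch, assuming a tf.keras setup, of the structure the abstract describes: an actor-critic network built through subclassing, and worker threads that push gradients to a shared master model and pull its weights back. The `ActorCritic` and `Worker` names, the dummy random rollouts, and all hyperparameters are illustrative assumptions, not the book's code.

```python
import threading

import tensorflow as tf
from tensorflow import keras


class ActorCritic(keras.Model):
    """Actor-critic network with a shared trunk, defined via subclassing."""

    def __init__(self, num_actions):
        super().__init__()
        self.hidden = keras.layers.Dense(128, activation="relu")
        self.policy_logits = keras.layers.Dense(num_actions)  # actor head
        self.value = keras.layers.Dense(1)                    # critic head

    def call(self, inputs):
        x = self.hidden(inputs)
        return self.policy_logits(x), self.value(x)


class Worker(threading.Thread):
    """A worker holds a local copy of the model, computes gradients from
    its own experience, applies them to the shared master model, and then
    pulls the updated master weights back (the A3C sync pattern)."""

    def __init__(self, master, optimizer, num_actions, state_dim, steps=10):
        super().__init__()
        self.master, self.optimizer = master, optimizer
        self.local = ActorCritic(num_actions)
        self.local(tf.zeros((1, state_dim)))  # build the local variables
        self.num_actions, self.state_dim = num_actions, state_dim
        self.steps = steps

    def run(self):
        for _ in range(self.steps):
            # Placeholder rollout: random states, actions, and returns stand
            # in for the environment interaction the chapter implements.
            states = tf.random.normal((5, self.state_dim))
            actions = tf.random.uniform((5,), maxval=self.num_actions,
                                        dtype=tf.int32)
            returns = tf.random.normal((5, 1))
            with tf.GradientTape() as tape:
                logits, values = self.local(states)
                advantage = returns - values              # A = R - V(s)
                critic_loss = tf.reduce_mean(tf.square(advantage))
                log_probs = tf.gather(tf.nn.log_softmax(logits), actions,
                                      axis=1, batch_dims=1)
                actor_loss = -tf.reduce_mean(
                    log_probs[:, None] * tf.stop_gradient(advantage))
                loss = actor_loss + 0.5 * critic_loss
            grads = tape.gradient(loss, self.local.trainable_variables)
            # Asynchronous update: local gradients are applied directly to
            # the shared master parameters ...
            self.optimizer.apply_gradients(
                zip(grads, self.master.trainable_variables))
            # ... and the worker re-syncs with the master afterwards.
            self.local.set_weights(self.master.get_weights())


num_actions, state_dim = 4, 8
master = ActorCritic(num_actions)
master(tf.zeros((1, state_dim)))  # build the master variables
optimizer = keras.optimizers.Adam(1e-3)

workers = [Worker(master, optimizer, num_actions, state_dim)
           for _ in range(2)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

In the actual agent, the placeholder rollout would be replaced by real environment steps, and the sync would occur every few steps or at episode end, as the abstract notes.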
Copyright information
© Springer Nature Singapore Pte Ltd. 2019