Reinforcement Learning for Optimal Feedback Control

A Lyapunov-Based Approach

  • Rushikesh Kamalapurkar
  • Patrick Walters
  • Joel Rosenfeld
  • Warren Dixon

Part of the Communications and Control Engineering book series (CCE)

Table of contents

  1. Front Matter
    Pages i-xvi
  2. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 1-16
  3. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 17-42
  4. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 43-98
  5. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 99-148
  6. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 149-193
  7. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 195-225
  8. Rushikesh Kamalapurkar, Patrick Walters, Joel Rosenfeld, Warren Dixon
    Pages 227-263
  9. Back Matter
    Pages 265-293

About this book

Introduction

Reinforcement Learning for Optimal Feedback Control develops model-based and data-driven reinforcement learning methods for solving optimal control problems in nonlinear deterministic dynamical systems. To achieve learning under uncertainty, data-driven methods for identifying system models in real time are also developed. Through simulations and experiments, the book illustrates the advantages gained from the use of a model and from the use of previous experience in the form of recorded data. The book’s focus on deterministic systems allows for an in-depth Lyapunov-based analysis of the performance of the methods described, both during the learning phase and during execution.
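
The real-time system-identification idea mentioned above can be sketched in a few lines. The snippet below uses a generic recursive-least-squares estimator on a linearly parameterized model; the regressor samples, gains, and parameter values are illustrative assumptions, not the book's specific (e.g., concurrent-learning) scheme:

```python
import numpy as np

# Sketch: identify unknown parameters theta in a linearly parameterized model
#   x_dot = Y(x, u) @ theta
# from noiseless regressor/derivative samples via recursive least squares (RLS).
def rls_identify(theta_true, steps=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(theta_true)
    theta_hat = np.zeros(n)          # parameter estimate
    P = 100.0 * np.eye(n)            # covariance (large: high initial uncertainty)
    for _ in range(steps):
        Y = rng.standard_normal(n)   # regressor sample, stands in for Y(x, u)
        y = Y @ theta_true           # measured state derivative, assumed available
        K = P @ Y / (1.0 + Y @ P @ Y)                # RLS gain
        theta_hat = theta_hat + K * (y - Y @ theta_hat)
        P = P - np.outer(K, Y @ P)                   # covariance update
    return theta_hat

theta_hat = rls_identify(np.array([-1.0, 2.0]))
```

With noiseless data and sufficiently exciting regressors, the estimate converges to the true parameters; in practice the excitation condition is what the book's data-driven methods are designed to relax.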

To yield an approximately optimal controller, the authors focus on theories and methods that fall under the umbrella of actor–critic methods for machine learning, which typically rely on instantaneous input–output measurements. They concentrate on establishing stability during both the learning phase and the execution phase, and on adaptive model-based and data-driven reinforcement learning, to assist readers through the learning process.
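
For the simplest possible case, the actor–critic structure can be illustrated by policy iteration on a scalar linear-quadratic problem. This is a generic approximate-dynamic-programming template, not the book's specific algorithms, and the dynamics, cost weights, and initial gain below are illustrative assumptions. The critic step evaluates the current policy (a scalar Lyapunov equation); the actor step improves the policy greedily:

```python
# Scalar LQR sketch: dynamics x_dot = a*x + b*u, cost = integral(q*x**2 + r*u**2).
# Policy iteration alternates a critic step (policy evaluation) with an
# actor step (policy improvement), converging to the Riccati solution.
def policy_iteration(a, b, q, r, iters=50):
    K = (abs(a) + 1.0) / b           # initial stabilizing gain: a - b*K < 0
    for _ in range(iters):
        # Critic: value V(x) = P*x**2 of policy u = -K*x solves the
        # scalar Lyapunov equation 2*(a - b*K)*P + q + r*K**2 = 0
        P = (q + r * K**2) / (2.0 * (b * K - a))
        # Actor: greedy improvement, new policy u = -K*x with K = b*P/r
        K = b * P / r
    return P, K

P, K = policy_iteration(a=1.0, b=1.0, q=1.0, r=1.0)
# Fixed point satisfies the Riccati equation 2*a*P - (b**2/r)*P**2 + q = 0.
```

The book's actor–critic methods replace the exact policy-evaluation step with online function approximation, which is where the Lyapunov-based analysis of the learning phase becomes essential.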

This monograph provides academic researchers with backgrounds in diverse disciplines, from aerospace engineering to computer science, who are interested in optimal control, reinforcement learning, functional analysis, and function approximation theory, with a good introduction to the use of model-based methods. The thorough treatment of this advanced approach to control will also interest practitioners working in the chemical-process and power-supply industries.

Keywords

  • Nonlinear Control
  • Lyapunov-based Control
  • Reinforcement Learning
  • Optimal Control
  • Dynamic Programming
  • Actor–Critic Methods
  • Real-time System Identification

Authors and affiliations

  • Rushikesh Kamalapurkar (1)
  • Patrick Walters (2)
  • Joel Rosenfeld (3)
  • Warren Dixon (4)
  1. Mechanical and Aerospace Engineering, Oklahoma State University, Stillwater, USA
  2. Naval Surface Warfare Center, Panama City, USA
  3. Electrical Engineering, Vanderbilt University, Nashville, USA
  4. Department of Mechanical and Aerospace Engineering, University of Florida, Gainesville, USA

Bibliographic information

  • DOI https://doi.org/10.1007/978-3-319-78384-0
  • Copyright Information Springer International Publishing AG 2018
  • Publisher Name Springer, Cham
  • eBook Packages Engineering
  • Print ISBN 978-3-319-78383-3
  • Online ISBN 978-3-319-78384-0
  • Series Print ISSN 0178-5354
  • Series Online ISSN 2197-7119