
Learning Stabilizable Dynamical Systems via Control Contraction Metrics

  • Conference paper in Algorithmic Foundations of Robotics XIII (WAFR 2018)

Part of the book series: Springer Proceedings in Advanced Robotics (SPAR, volume 14)

Abstract

We propose a novel framework for learning stabilizable nonlinear dynamical systems for continuous control tasks in robotics. The key idea is to develop a new control-theoretic regularizer for dynamics fitting rooted in the notion of stabilizability, which guarantees that the learned system can be accompanied by a robust controller capable of stabilizing any open-loop trajectory that the system may generate. By leveraging tools from contraction theory, statistical learning, and convex optimization, we provide a general and tractable semi-supervised algorithm to learn stabilizable dynamics, which can be applied to complex underactuated systems. We validated the proposed algorithm on a simulated planar quadrotor system and observed notably improved trajectory generation and tracking performance with the control-theoretic regularized model over models learned using traditional regression techniques, especially when using a small number of demonstration examples. The results presented illustrate the need to infuse standard model-based reinforcement learning algorithms with concepts drawn from nonlinear control theory for improved reliability.

M. Pavone—This work was supported by NASA under the Space Technology Research Grants Program, Grant NNX12AQ43G, and by the King Abdulaziz City for Science and Technology (KACST).
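To make the notion of a stabilizability-style regularizer concrete, the following is a minimal sketch of the underlying contraction test in the special case of a linear system and a constant metric. The paper's actual method uses state-dependent control contraction metrics and a convex (semi-infinite) program; here we only check the classical contraction condition A^T M + M A + 2*lam*M <= 0, where the function name, the example matrices, and the rate `lam` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def is_contracting(A, M, lam=0.1):
    """Check the constant-metric contraction condition for x_dot = A x:
    A^T M + M A + 2*lam*M must be negative semidefinite, which we test
    via the largest eigenvalue of the symmetric matrix S."""
    S = A.T @ M + M @ A + 2.0 * lam * M
    return float(np.max(np.linalg.eigvalsh(S))) <= 0.0

# A stable linear system with the identity metric satisfies the condition;
# an unstable one does not.
A_stable = np.array([[-1.0, 0.5],
                     [0.0, -1.0]])
A_unstable = np.eye(2)
M = np.eye(2)
print(is_contracting(A_stable, M))    # contraction certificate holds
print(is_contracting(A_unstable, M))  # no certificate
```

In the paper's setting this test generalizes to a matrix inequality imposed along demonstration data (and, for control contraction metrics, only in directions orthogonal to the actuated subspace), which is what makes the dynamics-fitting problem a constrained convex program rather than plain regression.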



Author information


Correspondence to Sumeet Singh.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Singh, S., Sindhwani, V., Slotine, JJ.E., Pavone, M. (2020). Learning Stabilizable Dynamical Systems via Control Contraction Metrics. In: Morales, M., Tapia, L., Sánchez-Ante, G., Hutchinson, S. (eds) Algorithmic Foundations of Robotics XIII. WAFR 2018. Springer Proceedings in Advanced Robotics, vol 14. Springer, Cham. https://doi.org/10.1007/978-3-030-44051-0_11
