
Reinforcement Learning Applied to a Human Arm Model

  • Conference paper
  • In: Multibody Dynamics 2019 (ECCOMAS 2019)

Part of the book series: Computational Methods in Applied Sciences (volume 53)

Abstract

In this contribution, we focus on a muscle-actuated human arm model [1] and discuss the applicability of Reinforcement Learning (RL) [2] for controlling it. The content is divided into five sections. We start by introducing the human arm model and the optimization method applied by the authors of the model. Afterwards, we recast the optimization problem in a form that RL can handle and introduce the RL approach we plan to apply. Before closing with the conclusion, we examine the results of both techniques in the numerics section.
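
The reference list below includes policy-gradient methods such as REINFORCE [9] and Proximal Policy Optimization [10]. Purely as an illustration of this class of RL update, and not as the authors' implementation, the following minimal sketch shows a REINFORCE-style policy-gradient step on a toy reaching task; the surrogate dynamics, the linear-Gaussian policy, and all hyperparameters are assumptions made for the example.

import numpy as np

# Minimal REINFORCE-style policy-gradient sketch (cf. [9]).
# Toy stand-in for a reaching task: the dynamics, the linear-Gaussian
# policy and all hyperparameters below are assumptions for illustration,
# not the model or method used in the paper.

rng = np.random.default_rng(0)

STATE_DIM, ACTION_DIM = 4, 2    # assumed toy dimensions
SIGMA = 0.1                     # fixed exploration noise
LEARNING_RATE = 1e-4
EPISODES, HORIZON = 500, 20

# Linear-Gaussian policy: a ~ N(theta @ s, SIGMA^2 * I)
theta = np.zeros((ACTION_DIM, STATE_DIM))

def step(state, action):
    """Toy surrogate dynamics: the goal is to drive the state to the origin."""
    next_state = state + 0.1 * np.concatenate([action, action])
    reward = -np.linalg.norm(next_state)    # penalise distance from the goal
    return next_state, reward

for _ in range(EPISODES):
    state = rng.normal(size=STATE_DIM)
    score = np.zeros_like(theta)            # sum over t of grad_theta log pi(a_t | s_t)
    episode_return = 0.0
    for _ in range(HORIZON):
        mean = theta @ state
        action = mean + SIGMA * rng.normal(size=ACTION_DIM)
        # gradient of log N(action; mean, SIGMA^2) with respect to theta
        score += np.outer((action - mean) / SIGMA**2, state)
        state, reward = step(state, action)
        episode_return += reward
    # REINFORCE ascent step: score function weighted by the episode return
    theta += LEARNING_RATE * episode_return * score

The citations of [10, 11] suggest that a proximal or trust-region variant of this update is of interest to the authors; the sketch above only illustrates the basic score-function gradient such methods build on.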


References

  1. Roller, M., Björkenstam, S., Linn, J., Leyendecker, S.: Optimal control of a biomechanical multibody model for the dynamic simulation of working tasks. In: Proceedings of the 8th ECCOMAS Thematic Conference on Multibody Dynamics, Prague, pp. 817–826 (2017)

  2. Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction, 1st edn. MIT Press, Cambridge (1998)

  3. Obentheuer, M., Roller, M., Björkenstam, S., Berns, K., Linn, J.: Comparison of different actuation modes of a biomechanical human arm model in an optimal control framework. In: Proceedings of the 5th Joint International Conference on Multibody System Dynamics IMSD, Lisbon (2018)

  4. Björkenstam, S., Leyendecker, S., Linn, J., Carlson, J.S., Lennartson, B.: Inverse dynamics for discrete geometric mechanics of multibody systems with application to direct optimal control. J. Comput. Nonlinear Dyn. 13(10), 101001 (2018)

  5. Gerdts, M.: Optimal Control of ODEs and DAEs. De Gruyter Textbook. De Gruyter, Berlin (2012)

  6. Ober-Blöbaum, S., Junge, O., Marsden, J.: Discrete mechanics and optimal control: an analysis. ESAIM Control Optim. Calc. Var. 17, 322–352 (2011)

  7. Wächter, A., Biegler, L.T.: On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Math. Program. 106, 25–57 (2006)

  8. Puterman, M.L.: Markov Decision Processes: Discrete Stochastic Dynamic Programming, 1st edn. Wiley, New York (1994)

  9. Williams, R.J.: Simple statistical gradient-following algorithms for connectionist reinforcement learning. Mach. Learn. 8, 229–256 (1992)

  10. Schulman, J., Wolski, F., Dhariwal, P., Radford, A., Klimov, O.: Proximal policy optimization algorithms. CoRR, vol. abs/1707.06347 (2017)

  11. Schulman, J., Levine, S., Abbeel, P., Jordan, M.I., Moritz, P.: Trust region policy optimization. In: ICML. JMLR Workshop and Conference Proceedings, Lille, France, vol. 37, pp. 1889–1897. JMLR.org (2015)

  12. Kullback, S.: Information Theory and Statistics. Wiley, New York (1959)

  13. Kullback, S., Leibler, R.A.: On information and sufficiency. Ann. Math. Statist. 22, 79–86 (1951)

  14. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., Corrado, G.S., Davis, A., Dean, J., Devin, M., Ghemawat, S., Goodfellow, I., Harp, A., Irving, G., Isard, M., Jia, Y., Jozefowicz, R., Kaiser, L., Kudlur, M., Levenberg, J., Mané, D., Monga, R., Moore, S., Murray, D., Olah, C., Schuster, M., Shlens, J., Steiner, B., Sutskever, I., Talwar, K., Tucker, P., Vanhoucke, V., Vasudevan, V., Viégas, F., Vinyals, O., Warden, P., Wattenberg, M., Wicke, M., Yu, Y., Zheng, X.: TensorFlow: large-scale machine learning on heterogeneous systems (2015). Software available from tensorflow.org

  15. Coady, P.: AI gym workout (2017). https://learningai.io/projects/2017/07/28/ai-gym-workout.html. Accessed 26 Oct 2018

Acknowledgements

The authors are grateful for the funding by the Federal Ministry of Education and Research of Germany (BMBF), project number 05M16UKD.

Author information

Corresponding author

Correspondence to Simon Gottschalk.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Burger, M., Gottschalk, S., Roller, M. (2020). Reinforcement Learning Applied to a Human Arm Model. In: Kecskeméthy, A., Geu Flores, F. (eds) Multibody Dynamics 2019. ECCOMAS 2019. Computational Methods in Applied Sciences, vol 53. Springer, Cham. https://doi.org/10.1007/978-3-030-23132-3_9

  • DOI: https://doi.org/10.1007/978-3-030-23132-3_9

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-23131-6

  • Online ISBN: 978-3-030-23132-3

  • eBook Packages: Engineering, Engineering (R0)
