Human Motion Imitation

  • Dana Kulić
Reference work entry

Abstract

Reproducing human behaviors and movement has long been an inspiration for humanoid robotics research. This chapter introduces the field of imitation learning and the key methods for observing human movement, modeling it, and reproducing it on humanoid robots. A key challenge when reproducing human movement is to account for the differences in kinematic and dynamic properties between the human and the robot, and to ensure that the robot maintains postural stability while reproducing the movement as closely as possible. The chapter first overviews the typical data processing pipeline, starting from human observation and progressing through motion modeling to reproduction. Next, it surveys approaches for modeling human movement, adapting it to ensure feasibility on the robot, and controlling the robot to replicate the desired movement. The chapter closes with a discussion of open problems and current research directions.
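To make the adaptation step concrete, the sketch below shows one minimal form of kinematic retargeting: clamping captured human joint trajectories to a robot's narrower joint limits before playback. This is an illustrative assumption, not the chapter's method; the joint names, limits, and synthetic capture data are hypothetical placeholders, and real systems typically combine such limits with balance and self-collision constraints.

```python
import numpy as np

# Hypothetical robot joint limits in radians, assumed for illustration only.
ROBOT_JOINT_LIMITS = {
    "shoulder_pitch": (-2.0, 2.0),
    "elbow_flexion": (0.0, 2.3),
}


def retarget(human_traj, joint_limits):
    """Clamp each captured joint trajectory to the robot's joint limits.

    human_traj: dict mapping joint name -> np.ndarray of angles over time.
    Returns a dict of joint trajectories that respect the robot's limits.
    """
    robot_traj = {}
    for joint, angles in human_traj.items():
        lo, hi = joint_limits[joint]
        robot_traj[joint] = np.clip(angles, lo, hi)
    return robot_traj


if __name__ == "__main__":
    t = np.linspace(0.0, 1.0, 100)
    # Synthetic "captured" human motion; the elbow exceeds the robot's limit.
    human = {
        "shoulder_pitch": 1.5 * np.sin(2 * np.pi * t),
        "elbow_flexion": 2.8 * np.abs(np.sin(np.pi * t)),
    }
    robot = retarget(human, ROBOT_JOINT_LIMITS)
    print({joint: float(angles.max()) for joint, angles in robot.items()})
```

Clamping alone can distort the movement; the approaches discussed in the chapter instead optimize the robot trajectory to stay close to the human demonstration while satisfying feasibility and stability constraints.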

Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, Canada
