
Intelligent Service Robotics, Volume 11, Issue 2, pp 149–169

Humanoids skill learning based on real-time human motion imitation using Kinect

  • Reda Elbasiony
  • Walid Gomaa
Original Research Paper

Abstract

In this paper, we propose a novel framework that enables humanoid robots to learn new skills from demonstration. The framework uses a real-time human motion imitation module as a demonstration interface, providing the desired motion to the learning module in an efficient and user-friendly way. This interface overcomes many limitations of currently used interfaces such as direct motion recording, kinesthetic teaching, and immersive teleoperation. It gives the human demonstrator the ability to control almost all body parts of the humanoid robot in real time, including hand shape and orientation, which are essential for object grasping. The humanoid robot is controlled remotely without any sophisticated haptic devices, relying only on an inexpensive Kinect sensor and two additional force sensors. To the best of our knowledge, this is the first use of the Kinect sensor to estimate hand shape and orientation for object grasping in the field of real-time human motion imitation. The observed motions are then projected onto a latent space using the Gaussian process latent variable model (GPLVM) to extract the relevant features. These features are used to train regression models via the variational heteroscedastic Gaussian process regression (VHGPR) algorithm, which has been shown to be both accurate and fast. The proposed framework is validated on activities involving both the upper and lower body, as well as object grasping.

Keywords

Imitation learning · Humanoid robot · Gaussian process latent variable model (GPLVM) · Variational heteroscedastic Gaussian process regression (VHGPR) · Kinect sensor · NAO robot · Grasping

Acknowledgements

This research has been supported by the Ministry of Higher Education (MoHE) of Egypt through a PhD fellowship. We sincerely thank Egypt-Japan University of Science and Technology (E-JUST) for its guidance and support.


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. Cyber-Physical Systems Lab, Egypt-Japan University of Science and Technology (E-JUST), New Borg El-Arab City, Alexandria, Egypt
  2. Faculty of Engineering, Tanta University, Tanta, Egypt
  3. Faculty of Engineering, Alexandria University, Alexandria, Egypt
