Human-Like Hand Reaching by Motion Prediction Using Long Short-Term Memory

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10652)

Abstract

Interaction between a robot and a human can be difficult to achieve with only reactive mechanisms, especially in a social setting, because the robot usually needs time to plan its movement. This paper discusses a motion generation system that allows humanoid robots to interact with humans by predicting their motion. To learn human motion, a Long Short-Term Memory (LSTM) network is trained on a public dataset. The effectiveness of the proposed technique is demonstrated by performing a handshake with a humanoid robot: instead of following the human palm, the robot learns to predict the point where the hands will meet. Experimental results for several motion plans are compared using three metrics, namely the smoothness, timeliness, and efficiency of the robot's movements. The predictive method offers a balanced trade-off across all three metrics.
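The abstract describes training an LSTM on recorded human motion so that the robot can anticipate where the hand will be rather than track the palm reactively. The Python (PyTorch) snippet below is a minimal sketch of such a predictor; the layer sizes, the 30-frame observation window, and the names `HandMotionPredictor` and `observed` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an LSTM-based hand-motion predictor in the spirit of the
# abstract. Architecture, horizon, and training target are assumptions.
import torch
import torch.nn as nn

class HandMotionPredictor(nn.Module):
    """Maps a sequence of observed 3D hand positions to a predicted future position."""
    def __init__(self, input_dim=3, hidden_dim=64, output_dim=3):
        super().__init__()
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, output_dim)

    def forward(self, positions):
        # positions: (batch, time, 3) observed palm trajectory
        _, (h_n, _) = self.lstm(positions)
        # Regress the predicted hand-meeting point from the final hidden state.
        return self.head(h_n[-1])

# Toy usage: predict from 30 observed frames of a (hypothetical) skeleton stream.
model = HandMotionPredictor()
observed = torch.randn(1, 30, 3)                          # stand-in for motion-capture data
predicted_point = model(observed)                         # (1, 3) predicted hand position
loss = nn.MSELoss()(predicted_point, torch.zeros(1, 3))   # placeholder regression target
loss.backward()
```

In such a setup the robot would replan its reach toward `predicted_point` as new frames arrive, rather than chasing the current palm position.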

Author information

Corresponding author

Correspondence to Phongtharin Vinayavekhin.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Vinayavekhin, P., et al. (2017). Human-Like Hand Reaching by Motion Prediction Using Long Short-Term Memory. In: Kheddar, A., et al. (eds.) Social Robotics. ICSR 2017. Lecture Notes in Computer Science (LNAI), vol. 10652. Springer, Cham. https://doi.org/10.1007/978-3-319-70022-9_16

  • DOI: https://doi.org/10.1007/978-3-319-70022-9_16

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70021-2

  • Online ISBN: 978-3-319-70022-9

  • eBook Packages: Computer Science, Computer Science (R0)
