
Object Grasping of Humanoid Robot Based on YOLO

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 11542)

Abstract

This paper presents a system for autonomous grasping on microcontroller-based humanoid robots such as the InMoov robot [1]. The system consists of a visual sensor, a central controller and a manipulator. We modify the open-source object detection software YOLO (You Only Look Once) v2 [2] and couple it with the visual sensor, so that the sensor reports not only the category of the target object but also, with the help of a depth camera, its location. We also estimate the dimensions (i.e., the height and width) of the target from its bounding box (Fig. 1). This information is sent to the central controller (a humanoid robot), which uses inverse kinematics to drive the manipulator (a customised robotic hand) to grasp the object. We conduct experiments with the InMoov robot; they show that our method can detect the target object and drive the robotic hand to grasp it.

Fig. 1. Autonomous grasping with real-time object detection
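The abstract's pipeline combines a YOLO bounding box with a depth-camera reading to obtain the target's 3D location, and estimates its width and height from the box extent. The paper's own code is not reproduced on this page; the sketch below is a minimal illustration under a pinhole camera model, where box_to_target, the intrinsics fx, fy, cx, cy, and the centre depth depth_m are all assumptions for the sketch, not names from the paper.

    import numpy as np

    # Hypothetical helper (not the authors' code): back-project a YOLO
    # bounding box to a 3D grasp point plus a rough size estimate,
    # assuming a calibrated pinhole camera aligned with the depth sensor.
    def box_to_target(box, depth_m, fx, fy, cx, cy):
        x_min, y_min, x_max, y_max = box       # box corners in pixels
        u = 0.5 * (x_min + x_max)              # box centre (pixels)
        v = 0.5 * (y_min + y_max)

        # Pinhole back-projection of the centre pixel into camera coordinates.
        X = (u - cx) * depth_m / fx
        Y = (v - cy) * depth_m / fy
        Z = depth_m

        # At range Z a pixel spans Z/fx metres horizontally and Z/fy metres
        # vertically, so the box extent gives the object's rough dimensions.
        width = (x_max - x_min) * depth_m / fx
        height = (y_max - y_min) * depth_m / fy
        return np.array([X, Y, Z]), (width, height)

For example, with fx = fy = 600 and a 100 x 200 pixel box centred on the principal point at 0.5 m depth, this gives an object roughly 0.083 m wide and 0.167 m tall, directly ahead of the camera.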
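The central controller then moves the manipulator to that grasp point using inverse kinematics. The page does not give the robot's IK formulation, so as an illustration of the principle here is the textbook closed-form solution for a planar two-link arm; two_link_ik and the link lengths l1, l2 are assumptions for the sketch, not the InMoov arm's actual geometry.

    import math

    # Textbook closed-form IK for a planar 2-link arm (one of the two
    # solution branches); illustrates the principle only, and is not the
    # kinematic model of the actual robot.
    def two_link_ik(x, y, l1, l2):
        d2 = x * x + y * y                      # squared distance to target
        # Law of cosines gives the elbow angle.
        cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
        if abs(cos_elbow) > 1.0:
            raise ValueError("target out of reach")
        elbow = math.acos(cos_elbow)
        # Shoulder angle: direction to the target minus the offset that the
        # bent elbow introduces.
        shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                 l1 + l2 * math.cos(elbow))
        return shoulder, elbow

A real humanoid arm has more joints, so in practice one would use a numerical solver or the robot's full kinematic model rather than this two-link simplification.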


References

  1. Langevin, G.: Hand robot InMoov (2016)

  2. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)

  3. Han, J., et al.: Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Process. Mag. 35(1), 84–100 (2018)

  4. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2015)

  5. Girshick, R., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)

  6. Redmon, J., et al.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)

  7. Roa, M.A., Suárez, R.: Grasp quality measures: review and performance. Auton. Robots 38(1), 65–88 (2015)

  8. Tikhanoff, V., et al.: Exploring affordances and tool use on the iCub. In: 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids). IEEE (2013)

  9. Kurban, R., Skuka, F., Bozpolat, H.: Plane segmentation of Kinect point clouds using RANSAC. In: The 7th International Conference on Information Technology (2015)

  10. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)

  11. Dakarimov, S., et al.: Study on the development and control of humanoid robot arm using MatLab/Arduino. In: Proceedings of the Korean Society for Fluid Power and Construction Equipment Conference, pp. 88–90 (2018)

  12. Thalmann, N.M., Tian, L., Yao, F.: Nadine: a social robot that can localize objects and grasp them in a human way. In: Prabaharan, S.R.S., Thalmann, N.M., Kanchana Bhaaskaran, V.S. (eds.) Frontiers in Electronic Technologies. LNEE, vol. 433, pp. 1–23. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-4235-5_1

  13. Tian, L., et al.: The making of a 3D-printed, cable-driven, single-model, lightweight humanoid robotic hand. Front. Robot. AI 4, 65 (2017)

  14. Tian, L., et al.: A methodology to model and simulate customized realistic anthropomorphic robotic hands. In: Proceedings of Computer Graphics International 2018. ACM (2018)

  15. Tian, L., et al.: Nature grasping by a cable-driven under-actuated anthropomorphic robotic hand. TELKOMNIKA 17(1), 1–7 (2019)


Acknowledgements

This research is supported by the BeingTogether Centre, a collaboration between Nanyang Technological University (NTU) Singapore and University of North Carolina (UNC) at Chapel Hill. The BeingTogether Centre is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centres in Singapore Funding Initiative.

Author information


Corresponding author

Correspondence to Daniel Thalmann.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Tian, L., Thalmann, N.M., Thalmann, D., Fang, Z., Zheng, J. (2019). Object Grasping of Humanoid Robot Based on YOLO. In: Gavrilova, M., Chang, J., Thalmann, N., Hitzer, E., Ishikawa, H. (eds) Advances in Computer Graphics. CGI 2019. Lecture Notes in Computer Science, vol. 11542. Springer, Cham. https://doi.org/10.1007/978-3-030-22514-8_47


  • DOI: https://doi.org/10.1007/978-3-030-22514-8_47


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-22513-1

  • Online ISBN: 978-3-030-22514-8

  • eBook Packages: Computer Science, Computer Science (R0)
