
Object Grasping of Humanoid Robot Based on YOLO

  • Li Tian
  • Nadia Magnenat Thalmann
  • Daniel Thalmann
  • Zhiwen Fang
  • Jianmin Zheng
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11542)

Abstract

This paper presents a system that aims to achieve autonomous grasping for microcontroller-based humanoid robots such as the InMoov robot [1]. The system consists of a visual sensor, a central controller and a manipulator. We modify the open-source object detection software YOLO (You Only Look Once) v2 [2] and couple it with the visual sensor, so that the sensor detects not only the category of the target object but also, with the help of a depth camera, its location. We also estimate the dimensions (i.e., the height and width) of the target from its bounding box (Fig. 1). This information is then sent to the central controller (a humanoid robot), which uses inverse kinematics to drive the manipulator (a customised robotic hand) to grasp the object. We conduct experiments to test our method on the InMoov robot. The experiments show that our method is capable of detecting the object and driving the robotic hand to grasp it.
Fig. 1. Autonomous grasping with real-time object detection
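For concreteness, the following is a minimal sketch (not the authors' code) of the localization step described in the abstract: a YOLO bounding box combined with the aligned depth image gives the object's 3D position and an estimate of its physical width and height via the pinhole camera model. The intrinsics FX, FY, PPX, PPY and the helper localize_from_bbox are placeholder assumptions; in practice the intrinsics come from the depth camera's calibration.

import numpy as np

FX, FY = 615.0, 615.0      # focal lengths in pixels (placeholder calibration values)
PPX, PPY = 320.0, 240.0    # principal point in pixels (placeholder calibration values)

def localize_from_bbox(bbox, depth_image):
    """bbox = (x_min, y_min, x_max, y_max) in pixels from the detector;
    depth_image is an HxW array of depth in metres aligned to the colour frame."""
    x_min, y_min, x_max, y_max = bbox
    u = (x_min + x_max) / 2.0          # bounding-box centre, pixel coordinates
    v = (y_min + y_max) / 2.0
    # Median depth inside the box is more robust to sensor holes than the single centre pixel.
    patch = depth_image[int(y_min):int(y_max), int(x_min):int(x_max)]
    z = float(np.median(patch[patch > 0]))
    # Back-project the centre pixel to camera coordinates (pinhole model).
    x = (u - PPX) * z / FX
    y = (v - PPY) * z / FY
    # The pixel extent of the box at depth z approximates the object's physical size.
    width = (x_max - x_min) * z / FX
    height = (y_max - y_min) * z / FY
    return (x, y, z), (width, height)

The returned position and size would then be passed to the controller, which solves the arm's inverse kinematics to place the hand at the object.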

Keywords

Robotics · Vision · Object detection · Motion control · Grasping


Acknowledgements

This research is supported by the BeingTogether Centre, a collaboration between Nanyang Technological University (NTU) Singapore and University of North Carolina (UNC) at Chapel Hill. The BeingTogether Centre is supported by the National Research Foundation, Prime Minister’s Office, Singapore under its International Research Centres in Singapore Funding Initiative.

References

  1. Langevin, G.: Hand robot InMoov (2016)
  2. Redmon, J., Farhadi, A.: YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2017)
  3. Han, J., et al.: Advanced deep-learning techniques for salient and category-specific object detection: a survey. IEEE Signal Process. Mag. 35(1), 84–100 (2018)
  4. Girshick, R.: Fast R-CNN. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
  5. Girshick, R., et al.: Rich feature hierarchies for accurate object detection and semantic segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2014)
  6. Redmon, J., et al.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016)
  7. Roa, M.A., Suárez, R.: Grasp quality measures: review and performance. Auton. Robots 38(1), 65–88 (2015)
  8. Tikhanoff, V., et al.: Exploring affordances and tool use on the iCub. In: 2013 13th IEEE-RAS International Conference on Humanoid Robots (Humanoids). IEEE (2013)
  9. Kurban, R., Skuka, F., Bozpolat, H.: Plane segmentation of Kinect point clouds using RANSAC. In: The 7th International Conference on Information Technology (2015)
  10. Fischler, M.A., Bolles, R.C.: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 24(6), 381–395 (1981)
  11. Dakarimov, S., et al.: Study on the development and control of humanoid robot arm using MatLab/Arduino. In: Proceedings of the Conference of the Korean Society for Fluid Power and Construction Equipment, pp. 88–90 (2018)
  12. Thalmann, N.M., Tian, L., Yao, F.: Nadine: a social robot that can localize objects and grasp them in a human way. In: Prabaharan, S.R.S., Thalmann, N.M., Kanchana Bhaaskaran, V.S. (eds.) Frontiers in Electronic Technologies. LNEE, vol. 433, pp. 1–23. Springer, Singapore (2017). https://doi.org/10.1007/978-981-10-4235-5_1
  13. Tian, L., et al.: The making of a 3D-printed, cable-driven, single-model, lightweight humanoid robotic hand. Front. Robot. AI 4, 65 (2017)
  14. Tian, L., et al.: A methodology to model and simulate customized realistic anthropomorphic robotic hands. In: Proceedings of Computer Graphics International 2018. ACM (2018)
  15. Tian, L., et al.: Nature grasping by a cable-driven under-actuated anthropomorphic robotic hand. TELKOMNIKA 17(1), 1–7 (2019)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Nanyang Technological University, Singapore, Singapore
  2. EPFL, Lausanne, Switzerland
  3. A*STAR, Singapore, Singapore
