Plane-Based Humanoid Robot Navigation and Object Model Construction for Grasping
Abstract
In this work we present an approach to humanoid robot navigation and object model construction for grasping, using only RGB-D data from an onboard depth sensor. A plane-based representation provides a high-level model of the workspace and is used to estimate both the global robot pose and the pose relative to the object. Visual feedback drives the robot to the desired pre-grasping pose, in which it determines the object's pose and dimensions. In this local grasping approach, a simulator with our high-level scene representation and a virtual camera is used to fine-tune the motion controllers and to simulate and validate the grasping process. We present experimental results obtained both in simulation, with a virtual camera and robot, and with a real humanoid robot equipped with an RGB-D camera that performed object grasping in low-texture environments.
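The paper's implementation is not reproduced here; as a minimal, illustrative sketch of the kind of plane-based pose recovery the abstract refers to, the snippet below estimates a 6-DoF pose from matched plane pairs, each plane given by a unit normal n and offset d with n·x = d. The rotation is recovered by Umeyama/Kabsch-style SVD alignment of the normals and the translation from the plane offsets; all names and conventions are assumptions, not the authors' code.

```python
import numpy as np

def pose_from_plane_correspondences(planes_a, planes_b):
    """Estimate (R, t) mapping frame A into frame B from matched planes.

    Each plane is a pair (n, d): unit normal n and offset d, i.e. n . x = d.
    Requires >= 3 correspondences whose normals span 3D (not all parallel).
    """
    N_a = np.array([n for n, _ in planes_a])   # (k, 3) stacked normals
    N_b = np.array([n for n, _ in planes_b])
    d_a = np.array([d for _, d in planes_a])
    d_b = np.array([d for _, d in planes_b])

    # Rotation: Kabsch/SVD alignment so that R @ n_a ~ n_b for all pairs.
    H = N_a.T @ N_b
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T

    # Translation: a transformed plane satisfies d_b = d_a + (R n_a) . t,
    # which is linear in t and solved in the least-squares sense.
    A = N_a @ R.T                               # rows are R n_a
    t, *_ = np.linalg.lstsq(A, d_b - d_a, rcond=None)
    return R, t
```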
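Likewise, the visual feedback loop that drives the robot to the pre-grasping pose is not specified in the abstract; one common form is proportional pose-based servoing on the pose error, sketched below under assumed conventions (4x4 homogeneous poses expressed in a common workspace frame) and an assumed gain.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def visual_servo_step(T_cur, T_goal, gain=0.3):
    """One proportional step of pose-based visual servoing.

    T_cur and T_goal are 4x4 homogeneous poses (e.g. recovered from the
    plane-based scene model). Returns a linear velocity v and an angular
    velocity w that reduce the pose error; both decay to zero at the goal.
    """
    v = gain * (T_goal[:3, 3] - T_cur[:3, 3])            # position error
    R_err = T_goal[:3, :3] @ T_cur[:3, :3].T             # residual rotation
    w = gain * Rotation.from_matrix(R_err).as_rotvec()   # axis-angle error
    return v, w
```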
Keywords
Object grasping · Humanoid robot · Pose recovery
Acknowledgment
This work was supported by the Polish National Science Center (NCN) under research grants 2014/15/B/ST6/02808 and 2017/27/B/ST6/01743.