Abstract
The ability to grasp is a fundamental requirement for service robots performing meaningful tasks in ordinary environments. However, grasp robustness can be compromised by inaccurate (or missing) tactile and proprioceptive sensing, especially in the presence of unforeseen slippage. Vision can therefore be instrumental in detecting grasp errors. In this paper, we present an RGB-D visual application for discerning success or failure when a robot grasps unknown objects using poor proprioceptive information and/or a deformable gripper without tactile sensing. The proposed application is divided into two stages: visual gripper detection and recognition, and grasp assessment (i.e. checking whether a grasping error has occurred). To this end, three visual cues are combined: colour, depth and edges. The development is supported by experimental results on the Hobbit robot, which is equipped with an elastically deformable gripper.
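The abstract's fusion of colour, depth and edge cues can be illustrated with a minimal numpy sketch. This is only an assumption-laden illustration, not the paper's actual pipeline: the reference colour, depth working range, thresholds, and the simple conjunction used for fusion are all hypothetical placeholders.

```python
import numpy as np

def detect_gripper(rgb, depth, colour_ref, colour_tol=30.0,
                   depth_range=(0.3, 0.8), edge_thresh=20.0):
    """Fuse colour, depth and edge cues into a binary gripper mask.

    rgb:        H x W x 3 uint8 image
    depth:      H x W float array (metres)
    colour_ref: assumed reference gripper colour (R, G, B); the paper
                does not specify the actual colour model used.
    """
    # Colour cue: pixels whose RGB value is close to the reference colour.
    colour_mask = np.linalg.norm(
        rgb.astype(np.float32) - np.asarray(colour_ref, np.float32),
        axis=2) < colour_tol

    # Depth cue: pixels inside an assumed working range of the gripper.
    depth_mask = (depth > depth_range[0]) & (depth < depth_range[1])

    # Edge cue: finite-difference gradient magnitude on grey intensity.
    grey = rgb.astype(np.float32).mean(axis=2)
    gy, gx = np.gradient(grey)
    edge_mask = np.hypot(gx, gy) > edge_thresh

    # Fusion (simplified here to a conjunction of colour and depth,
    # with the edge map returned as supporting evidence).
    return colour_mask & depth_mask, edge_mask

# Usage on a synthetic 10x10 RGB-D frame with a reddish "gripper" patch.
rgb = np.zeros((10, 10, 3), dtype=np.uint8)
rgb[2:6, 2:6] = (200, 50, 50)
depth = np.full((10, 10), 1.5)
depth[2:6, 2:6] = 0.5
mask, edges = detect_gripper(rgb, depth, colour_ref=(200, 50, 50))
```

In a real system the fused mask would then feed the second stage (grasp assessment), e.g. by checking whether the object region remains between the gripper fingers across frames.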
Acknowledgements
This work has been partially funded by Ministerio de Economía y Competitividad (DPI2015-69041-R), Generalitat Valenciana (PROMETEOII/2014/028), and the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 288146 (Hobbit).
Copyright information
© 2017 Springer International Publishing AG
Cite this paper
Martinez-Martin, E., Fischinger, D., Vincze, M., del Pobil, A.P. (2017). An RGB-D Visual Application for Error Detection in Robot Grasping Tasks. In: Chen, W., Hosoda, K., Menegatti, E., Shimizu, M., Wang, H. (eds) Intelligent Autonomous Systems 14. IAS 2016. Advances in Intelligent Systems and Computing, vol 531. Springer, Cham. https://doi.org/10.1007/978-3-319-48036-7_18
DOI: https://doi.org/10.1007/978-3-319-48036-7_18
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-48035-0
Online ISBN: 978-3-319-48036-7
eBook Packages: Engineering