
An RGB-D Visual Application for Error Detection in Robot Grasping Tasks

  • Conference paper
  • First Online:
Intelligent Autonomous Systems 14 (IAS 2016)

Part of the book series: Advances in Intelligent Systems and Computing ((AISC,volume 531))

Abstract

The ability to grasp is a fundamental requirement for service robots to perform meaningful tasks in ordinary environments. However, grasp robustness can be compromised by inaccurate (or absent) tactile and proprioceptive sensing, especially in the presence of unforeseen slippage. Vision can therefore be instrumental in detecting grasp errors. In this paper, we present an RGB-D visual application for discerning the success or failure of robot grasping of unknown objects when only poor proprioceptive information is available and/or a deformable gripper without tactile sensing is used. The proposed application is divided into two stages: visual gripper detection and recognition, and grasp assessment (i.e., checking whether a grasping error has occurred). To this end, three visual cues are combined: colour, depth and edges. The development is supported by experimental results on the Hobbit robot, which is equipped with an elastically deformable gripper.
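The abstract describes fusing colour, depth and edge cues to decide whether a grasped object is still held. The following is a minimal illustrative sketch of that idea, not the authors' implementation: it assumes each cue has already been reduced to a binary mask over the gripper region, and the function name, vote threshold and fill-ratio threshold are invented for illustration.

```python
import numpy as np

def detect_grasp_error(color_mask, depth_mask, edge_mask,
                       min_votes=2, min_fill=0.2):
    """Combine three binary visual cues over the gripper region by
    majority vote, then flag a grasp error when too few pixels are
    supported by the combined evidence (thresholds are illustrative)."""
    votes = (color_mask.astype(int)
             + depth_mask.astype(int)
             + edge_mask.astype(int))
    object_present = votes >= min_votes          # per-pixel majority vote
    fill = object_present.mean()                 # fraction of region covered
    return bool(fill < min_fill)                 # True -> grasp likely failed

# Toy 4x4 masks: the object occupies the top half in all three cues.
color = np.zeros((4, 4), dtype=bool); color[:2] = True
depth = color.copy()
edges = color.copy()
print(detect_grasp_error(color, depth, edges))   # cues agree: object held
print(detect_grasp_error(color,                  # only one cue fires:
                         np.zeros((4, 4), bool), # evidence too weak,
                         np.zeros((4, 4), bool)))  # error flagged
```

In a real system each mask would come from a segmentation step (e.g. colour thresholding, depth clipping around the gripper, and an edge detector), and the thresholds would be tuned on the robot.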



Acknowledgements

This work has been partially funded by Ministerio de Economía y Competitividad (DPI2015-69041-R), Generalitat Valenciana (PROMETEOII/2014/028), and the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 288146 (Hobbit).

Corresponding author

Correspondence to Ester Martinez-Martin.


Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Martinez-Martin, E., Fischinger, D., Vincze, M., del Pobil, A.P. (2017). An RGB-D Visual Application for Error Detection in Robot Grasping Tasks. In: Chen, W., Hosoda, K., Menegatti, E., Shimizu, M., Wang, H. (eds) Intelligent Autonomous Systems 14. IAS 2016. Advances in Intelligent Systems and Computing, vol 531. Springer, Cham. https://doi.org/10.1007/978-3-319-48036-7_18

  • DOI: https://doi.org/10.1007/978-3-319-48036-7_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-48035-0

  • Online ISBN: 978-3-319-48036-7

  • eBook Packages: Engineering (R0)
