
Communicating Unknown Objects to Robots through Pointing Gestures

  • Conference paper
Advances in Autonomous Robotics Systems (TAROS 2014)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 8717)


Abstract

Delegating tasks from a human to a robot requires an efficient and easy-to-use communication pipeline between them, especially when inexperienced users are involved. This work presents a robotic system that bridges this communication gap by exploiting 3D sensing for gesture recognition and real-time object segmentation. We visually extract an unknown object that a human indicates through a pointing gesture, thereby communicating the object of interest to the robot so that it can perform a subsequent task. The robot uses RGB-D sensors to observe the human and to find the 3D point indicated by the pointing gesture. This point initializes a fast, fixation-based object segmentation algorithm, which infers the outline of the whole object. A series of experiments with different objects and pointing gestures shows that the recognition of the gesture, the extraction of the pointing direction in 3D, and the object segmentation all perform robustly. The presented system can serve as a first step towards more complex tasks, such as object recognition, grasping, or learning by demonstration, with obvious value in both industrial and domestic settings.
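The core geometric step the abstract describes is converting a pointing gesture into a 3D fixation point: cast a ray from the arm through the scene point cloud and take the first scene point the ray passes close to. The sketch below illustrates that idea only; `pointing_target`, its joint inputs, and the distance threshold are hypothetical and are not the authors' implementation, which uses RGB-D skeleton tracking and a fixation-based segmenter.

```python
import numpy as np

def pointing_target(elbow, hand, cloud, max_ray_dist=0.05):
    """Return the scene point closest to the pointing ray, or None.

    Hypothetical helper: the ray runs from the elbow joint through the
    hand joint; `cloud` is an (N, 3) array of scene points in meters.
    """
    origin = np.asarray(hand, dtype=float)
    direction = origin - np.asarray(elbow, dtype=float)
    direction /= np.linalg.norm(direction)

    cloud = np.asarray(cloud, dtype=float)
    rel = cloud - origin
    t = rel @ direction                      # projection length along the ray
    perp = rel - np.outer(t, direction)      # perpendicular offset from the ray
    dist = np.linalg.norm(perp, axis=1)
    mask = (t > 0) & (dist < max_ray_dist)   # in front of the hand, near the ray
    if not mask.any():
        return None
    # Of all candidate hits, take the one nearest to the hand along the ray.
    idx = np.flatnonzero(mask)[np.argmin(t[mask])]
    return cloud[idx]
```

In a full pipeline, the returned point would seed the fixation-based segmentation that recovers the object's outline.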




© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Großmann, B., Pedersen, M.R., Klonovs, J., Herzog, D., Nalpantidis, L., Krüger, V. (2014). Communicating Unknown Objects to Robots through Pointing Gestures. In: Mistry, M., Leonardis, A., Witkowski, M., Melhuish, C. (eds) Advances in Autonomous Robotics Systems. TAROS 2014. Lecture Notes in Computer Science, vol 8717. Springer, Cham. https://doi.org/10.1007/978-3-319-10401-0_19

  • DOI: https://doi.org/10.1007/978-3-319-10401-0_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-10400-3

  • Online ISBN: 978-3-319-10401-0

  • eBook Packages: Computer Science (R0)
