
Learning Semantic Interaction among Graspable Objects

  • Swagatika Panda
  • A. H. Abdul Hafez
  • C. V. Jawahar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8251)

Abstract

In this work, we aim to understand semantic interaction among graspable objects in both direct and indirect physical contact for robotic manipulation tasks. Given an object of interest, its support relationship with the other graspable objects is inferred hierarchically. The support relationship is used to predict the “support order”, i.e. the order in which the surrounding objects need to be removed in order to manipulate the target object. We believe this can extend the scope of robotic manipulation tasks to typical clutter involving physical contact, overlap, and objects of generic shapes and sizes. We have created an RGBD dataset of various objects in clutter using a Kinect sensor, and we conducted experiments and analysed the performance of our approach on images from this dataset.
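As a rough illustration of the support-order idea only (not the authors' method, which infers support relations from RGBD data), the sketch below assumes pairwise support relations are already available as (upper, lower) pairs and derives a removal order for a hypothetical stack by clearing everything resting, directly or indirectly, on the target object.

# Minimal sketch, assuming support relations are given as "upper rests on lower" pairs.
# The removal order is obtained by recursively clearing objects stacked on the target.

from collections import defaultdict

def support_order(supports, target):
    """supports: list of (upper, lower) pairs; returns objects to remove, top-down,
    before the target object can be manipulated."""
    above = defaultdict(set)          # lower -> set of objects resting on it
    for upper, lower in supports:
        above[lower].add(upper)

    order, visited = [], {target}

    def clear(obj):
        # Remove everything stacked (directly or indirectly) on obj, topmost first.
        for upper in sorted(above[obj]):
            if upper not in visited:
                visited.add(upper)
                clear(upper)
                order.append(upper)

    clear(target)
    return order

if __name__ == "__main__":
    # Hypothetical clutter: a book on a box, a cup on the book; the target is the box.
    relations = [("book", "box"), ("cup", "book")]
    print(support_order(relations, "box"))   # ['cup', 'book']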

Keywords

Robotic Vision · Support Relation · Support Order · Semantic Interaction · RGBD

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Swagatika Panda 1
  • A. H. Abdul Hafez 1
  • C. V. Jawahar 1
  1. International Institute of Information Technology, Hyderabad, India
