Learning Semantic Interaction among Graspable Objects
In this work, we aim to understand semantic interaction among graspable objects in both direct and indirect physical contact for robotic manipulation tasks. Given an object of interest, its support relationship with other graspable objects is inferred hierarchically. The support relationship is used to predict the “support order”, i.e., the order in which the surrounding objects need to be removed in order to manipulate the target object. We believe this can extend the scope of robotic manipulation tasks to typical clutter involving physical contact, overlap, and objects of generic shapes and sizes. We have created an RGBD dataset of cluttered scenes containing various objects, captured with a Kinect sensor. We conducted experiments on images from this dataset and analysed the performance of our approach.
Keywords: Robotic Vision · Support Relation · Support Order · Semantic Interaction · RGBD
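The support-order idea described in the abstract can be sketched as a graph computation: if support relations are modelled as a directed graph (an edge from a supporter to each object resting on it), the removal order for a target is a reverse topological traversal of everything transitively resting on it. This is only an illustrative sketch under that assumption; the function and relation names are hypothetical and not taken from the paper.

```python
from collections import defaultdict

def support_order(supports, target):
    """Illustrative sketch (hypothetical API, not the paper's method).

    supports: list of (supporter, supported) pairs, meaning the second
              object rests on the first.
    Returns the order in which objects resting (directly or indirectly)
    on `target` must be removed before `target` can be manipulated:
    an object is removed only after everything resting on it is gone.
    """
    graph = defaultdict(list)
    for supporter, supported in supports:
        graph[supporter].append(supported)

    # Collect all objects transitively supported by the target.
    above, stack = set(), [target]
    while stack:
        node = stack.pop()
        for child in graph[node]:
            if child not in above:
                above.add(child)
                stack.append(child)

    # Depth-first post-order: children (objects on top) come out first.
    order, removed = [], set()

    def remove(node):
        for child in graph[node]:
            if child in above and child not in removed:
                remove(child)
        if node in above and node not in removed:
            removed.add(node)
            order.append(node)

    for obj in above:
        remove(obj)
    return order
```

For example, with a cup and a ball resting on a box that rests on the table, asking for the support order of the table yields the cup and ball before the box, since the box cannot be removed while objects rest on it.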