Abstract
Identifying a suitable grasping pattern for a wide variety of objects is a challenging computer vision task. It plays a vital role in robotics, where a robotic hand is used to grasp different objects. Most work in this area is based on 3D robotic grippers, and an ample body of work also exists on humanoid robotic hands. However, there is negligible work on estimating grasping patterns from 2D images of objects. In this paper, we propose a novel method to learn grasping patterns from images and from data recorded with a dataglove, as provided by the TUB dataset. Our network fine-tunes a pre-trained deep Convolutional Neural Network (CNN), AlexNet, to learn deep features from images that correspond to human grasps. The results show that some interesting grasping patterns are learned. In addition, we use two methods, Support Vector Machines (SVM) and Hotelling's T² test, to demonstrate that the dataset contains distinctive grasps for different objects. The results show promising grasping patterns that resemble actual human grasps.
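The abstract mentions using Hotelling's T² test to check that different objects elicit distinguishable grasps. As a minimal sketch (not the authors' implementation; the function name and feature layout are illustrative), the two-sample Hotelling's T² statistic with a pooled covariance estimate can be computed as follows, treating each row as one recorded grasp feature vector:

```python
import numpy as np

def hotelling_t2(x, y):
    """Two-sample Hotelling's T^2 statistic with pooled covariance.

    x, y: (n_samples, n_features) arrays of grasp feature vectors,
          one group per object. Returns (T2, F), where F is the
          equivalent F-statistic.
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n1, p = x.shape
    n2 = y.shape[0]
    d = x.mean(axis=0) - y.mean(axis=0)  # difference of group means
    # Pooled sample covariance across both groups
    s_pooled = ((n1 - 1) * np.cov(x, rowvar=False) +
                (n2 - 1) * np.cov(y, rowvar=False)) / (n1 + n2 - 2)
    t2 = (n1 * n2) / (n1 + n2) * d @ np.linalg.solve(s_pooled, d)
    # Convert to an F-statistic with (p, n1 + n2 - p - 1) degrees of freedom
    f = (n1 + n2 - p - 1) / ((n1 + n2 - 2) * p) * t2
    return t2, f
```

Under the null hypothesis that the two groups share a mean, the returned F value follows an F(p, n1 + n2 − p − 1) distribution, so a large value indicates the two objects are grasped in measurably different ways.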
Notes
- 1.
The description of the TUB dataset mentions 17 subjects; however, the actual dataset contains recordings from 18 subjects.
Copyright information
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Zia, A., Tiddeman, B., Shaw, P. (2018). Estimating Grasping Patterns from Images Using Finetuned Convolutional Neural Networks. In: Giuliani, M., Assaf, T., Giannaccini, M. (eds) Towards Autonomous Robotic Systems. TAROS 2018. Lecture Notes in Computer Science(), vol 10965. Springer, Cham. https://doi.org/10.1007/978-3-319-96728-8_6
Print ISBN: 978-3-319-96727-1
Online ISBN: 978-3-319-96728-8