RGB-D-Based Features for Recognition of Textureless Objects

  • Santosh Thoduka
  • Stepan Pazekha
  • Alexander Moriarty
  • Gerhard K. Kraetzschmar
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9776)

Abstract

Autonomous industrial robots need to recognize objects robustly in cluttered environments. The use of RGB-D cameras has advanced research in 3D object recognition, but textureless objects remain a challenge. We propose a set of features, including the bounding box, mean circle fit and radial density distribution, that describe the size, shape and colour of objects. The features are extracted from point clouds of a set of objects and used to train an SVM classifier. Various combinations of the proposed features are tested to determine their influence on the recognition rate. Medium-sized objects are recognized with high accuracy, whereas small objects have a lower recognition rate. The minimum range and resolution of the cameras are still an issue but are expected to improve as the technology matures.
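The pipeline the abstract describes, extracting per-object size/shape features from a point cloud and feeding them to an SVM, can be sketched as follows. This is a minimal illustration assuming NumPy and scikit-learn; the feature computations (`extract_features` and its bounding-box, mean-radius, and radial-histogram terms) are simplified stand-ins for the paper's definitions, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import SVC

def extract_features(points):
    """Simplified size/shape feature vector from an Nx3 point cloud.

    Stand-ins for the paper's features: axis-aligned bounding-box
    extents, the mean radial distance of points from the centroid
    (a crude 'mean circle fit'), and a radial density histogram.
    """
    centroid = points.mean(axis=0)
    extents = points.max(axis=0) - points.min(axis=0)       # bounding box (x, y, z)
    radii = np.linalg.norm(points[:, :2] - centroid[:2], axis=1)
    mean_radius = radii.mean()                              # approximate mean circle fit
    density, _ = np.histogram(radii, bins=4, density=True)  # radial density distribution
    return np.concatenate([extents, [mean_radius], density])

# Toy training data: two synthetic "object classes"
# (a small cube vs. a wide flat disc), 20 clouds each.
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(20):
    cube = rng.uniform(-0.01, 0.01, size=(200, 3))
    disc = rng.uniform(-0.05, 0.05, size=(200, 3)) * np.array([1.0, 1.0, 0.05])
    X += [extract_features(cube), extract_features(disc)]
    y += [0, 1]

clf = SVC(kernel="rbf").fit(X, y)
```

A held-out cloud is then classified by extracting the same feature vector and calling `clf.predict`; in the paper, colour statistics would be appended to this vector as well.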

Keywords

Object recognition, Machine learning, Textureless objects, RGB-D data, Coloured point clouds

Acknowledgements

We gratefully acknowledge the continued support of the RoboCup team by the b-it Bonn-Aachen International Center for Information Technology and the Bonn-Rhein-Sieg University of Applied Sciences.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Santosh Thoduka (1)
  • Stepan Pazekha (1)
  • Alexander Moriarty (1)
  • Gerhard K. Kraetzschmar (1)
  1. Department of Computer Science, Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany
