Adaptive Learning Methods for Autonomous Mobile Manipulation in RoboCup@Home

  • Raphael Memmesheimer (corresponding author)
  • Viktor Seib
  • Tobias Evers
  • Daniel Müller
  • Dietrich Paulus
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11531)

Abstract

Team homer@UniKoblenz has become an integral part of the RoboCup@Home community. As such, we would like to share the experience we gained during the competitions with new teams. In this paper we describe our approaches, with a special focus on our demonstration in this year’s finals. This includes semantic exploration, adaptive programming by demonstration, and touch-enforcing manipulation. We believe that these demonstrations have the potential to influence the design of future RoboCup@Home tasks. We also present our current research efforts in benchmarking imitation learning tasks, gesture recognition, and a low-cost autonomous robot platform. Our software can be found on GitHub at https://github.com/homer-robotics.

Acknowledgement

We want to thank the participating students who supported the preparation, namely Ida Germann, Mark Mints, Patrik Schmidt, Isabelle Kuhlmann, Robin Bartsch, Lukas Buchhold, Christian Korbach, Thomas Weiland, Niko Schmidt, and Ivanna Kramer. Further, we want to thank our sponsors (University of Koblenz-Landau, Student Parliament of the University of Koblenz-Landau Campus Koblenz, PAL Robotics, Einst e.V., CV e.V., Neoalto, and KEVAG Telekom GmbH).

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Raphael Memmesheimer (corresponding author)¹
  • Viktor Seib¹
  • Tobias Evers¹
  • Daniel Müller¹
  • Dietrich Paulus¹

  1. Active Vision Group, Institute for Computational Visualistics, University of Koblenz-Landau, Koblenz, Germany