A Semantic-Based Method for Teaching Industrial Robots New Tasks

  • Karinne Ramirez-Amaro
  • Emmanuel Dean-Leon
  • Florian Bergner
  • Gordon Cheng
Project Report

Abstract

This paper presents the results of the Artificial Intelligence (AI) method developed during the European project “Factory-in-a-day”. Advanced AI solutions, such as the one proposed here, enable natural human–robot collaboration, an important capability for robots in industrial warehouses. This new generation of robots is expected to work in heterogeneous production lines, efficiently interacting and collaborating with human co-workers in open, unstructured, and dynamic environments. To achieve this, robots need to understand and recognize demonstrations given by different operators. We therefore developed a flexible and modular process for programming industrial robots based on semantic representations. This novel learning-by-demonstration method enables non-expert operators to program new tasks on industrial robots.
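To give a concrete flavour of what a semantic representation of a demonstration can look like, the following is a minimal, purely illustrative sketch (not the authors' implementation). It assumes hypothetical observation predicates such as hand motion and object relations, and maps them to high-level activity labels with simple rules:

```python
# Illustrative sketch only: rule-based semantic inference of a demonstrated
# activity from hypothetical observation predicates. The predicate names and
# activity labels are assumptions for illustration, not the paper's API.

from dataclasses import dataclass


@dataclass
class Observation:
    hand_moving: bool      # is the operator's hand in motion?
    object_in_hand: bool   # is an object currently grasped?
    object_acted_on: bool  # is the hand approaching or acting on an object?


def infer_activity(obs: Observation) -> str:
    """Map low-level observation predicates to a high-level activity label."""
    if not obs.hand_moving:
        return "hold" if obs.object_in_hand else "idle"
    if obs.object_in_hand:
        return "transport"   # moving while holding an object
    if obs.object_acted_on:
        return "reach"       # moving towards an object
    return "move"            # unconstrained hand motion


if __name__ == "__main__":
    demo = Observation(hand_moving=True, object_in_hand=True, object_acted_on=False)
    print(infer_activity(demo))  # -> "transport"
```

Abstracting demonstrations into such symbolic activity labels, rather than raw trajectories, is what allows the same recognition rules to generalize across different operators and setups.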

Keywords

Semantic representations · Knowledge and reasoning · Teaching by demonstration

Acknowledgements

We would like to thank our colleagues Katharina Stadler and Wibke Borngesser for all their support during the project Factory-in-a-day.

This work was supported by the European Community Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 609206, and was partially supported by the German Research Foundation (DFG) as part of Collaborative Research Center (Sonderforschungsbereich) 1320 “EASE—Everyday Activity Science and Engineering”, University of Bremen.


Copyright information

© Gesellschaft für Informatik e.V. and Springer-Verlag GmbH Germany, part of Springer Nature 2019

Authors and Affiliations

  1. Faculty of Electrical and Computer Engineering, Institute for Cognitive Systems, Technical University of Munich, Munich, Germany