A Semantic-Based Method for Teaching Industrial Robots New Tasks
This paper presents the Artificial Intelligence (AI) method developed during the European project "Factory-in-a-day". Advanced AI solutions, such as the one proposed here, enable natural human–robot collaboration, an important capability for robots in industrial warehouses. This new generation of robots is expected to work on heterogeneous production lines, interacting and collaborating efficiently with human co-workers in open, unstructured, and dynamic environments. To do so, robots need to understand and recognize demonstrations from different operators. We therefore developed a flexible and modular process for programming industrial robots based on semantic representations. This novel learning-by-demonstration method enables non-expert operators to teach industrial robots new tasks.
Keywords: Semantic representations · Knowledge and reasoning · Teaching by demonstration
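For illustration, the sketch below shows one minimal way a semantic, rule-based activity recognizer could map abstract perception predicates from a human demonstration to high-level activity labels that a robot can reuse. The predicates (`hand_moving`, `object_in_hand`, `object_acted_on`), the activity labels, and the rules themselves are hypothetical assumptions inspired by the described approach, not the project's actual implementation.

```python
# Hypothetical sketch of semantic, rule-based activity recognition.
# Predicates, labels, and rules are illustrative assumptions only;
# this is not the Factory-in-a-day implementation.

from dataclasses import dataclass


@dataclass
class Observation:
    """Abstract per-frame predicates extracted from perception."""
    hand_moving: bool      # is the demonstrator's hand in motion?
    object_in_hand: bool   # is an object currently grasped?
    object_acted_on: bool  # is the hand or held object acting on another object?


def infer_activity(obs: Observation) -> str:
    """Map abstract predicates to a high-level activity label."""
    if not obs.hand_moving:
        return "idle"
    if not obs.object_in_hand:
        # Moving an empty hand: either approaching an object or just moving.
        return "reach" if obs.object_acted_on else "move"
    # Moving with an object in hand.
    return "manipulate" if obs.object_acted_on else "transport"


if __name__ == "__main__":
    # Toy demonstration: a sequence of observed frames is turned into
    # a sequence of semantic activity labels.
    demo = [
        Observation(False, False, False),  # hand at rest
        Observation(True, False, True),    # reaching for a part
        Observation(True, True, False),    # carrying the part
        Observation(True, True, True),     # e.g. inserting or assembling
    ]
    print([infer_activity(obs) for obs in demo])
    # -> ['idle', 'reach', 'transport', 'manipulate']
```

Because the rules operate on abstract predicates rather than raw sensor values, the same labels can in principle be reused across different operators and sensor setups, which is the kind of flexibility the abstract attributes to semantic representations.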
We would like to thank our colleagues Katharina Stadler and Wibke Borngesser for all their support during the Factory-in-a-day project.
This work was supported by the European Community Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 609206, and partially by the German Research Foundation (DFG) as part of the Collaborative Research Centre (Sonderforschungsbereich) 1320 "EASE - Everyday Activity Science and Engineering" at the University of Bremen.