e & i Elektrotechnik und Informationstechnik, Volume 136, Issue 7, pp 307–312

Safety of connected, highly automated robots

  • Willibald Krenn
Originalarbeit


The article examines current challenges to the operational safety and the cyber security of highly automated and connected robots. The focus is not only on classical industrial robots, but on robots in general, such as (partially) automated vehicles. After a brief introduction to fundamental challenges and the state of the art, current developments from research and standardisation are presented. Particular attention is paid to the strong overlap with the topics of cyber security and artificial intelligence, which is especially pronounced for highly automated robots, and the question of the safety and explainability ("Explainable AI") of such systems is explored. The article is based, among other sources, on findings from the research projects Enable-S3 and Productive4.0, and is an extended and updated version of the identically titled talk given at the IT-Kolloquium 2019.


Safety & security, robotics, artificial intelligence, Explainable-AI, cyber security, verification, Enable-S3, Productive4.0

Safety & security of connected and highly automated robots


The paper presents current challenges to the safety and security of highly automated and connected robots. The term robot is used in its generic form and covers all systems from conventional industrial robots to highly automated vehicles. After a quick introduction to the basic challenges and the current state of the art, the article presents current research and standardisation activities. A special focus is given to cyber security and artificial intelligence (AI), which play important roles in connected and highly automated systems. In the case of AI, the article also looks at the topic of "explainability", which could become an important concept when using AI in safety-critical systems. The article is based on research results of the projects Enable-S3 and Productive4.0 and is an extended and updated version of a talk given at the IT-Kolloquium 2019 in Vienna.


Safety & security, robots, artificial intelligence, Explainable-AI, cyber security, verification, Enable-S3, Productive4.0



Part of the work has received funding from the EU ARTEMIS/ECSEL Joint Undertaking under grant agreements n° 692455 (Enable-S3) and n° 737459 (Productive4.0), and from both the EC ECSEL JU and the partners' national programmes/funding authorities (in Austria the FFG (Austrian Research Promotion Agency) on behalf of BMVIT, the Federal Ministry of Transport, Innovation and Technology), under grant agreements n° 853308 (Enable-S3) and n° 858992 (Productive4.0), and from the Horizon 2020 Programme of the EC.



Copyright information

© Springer-Verlag GmbH Austria, part of Springer Nature 2019

Authors and Affiliations

  1. AIT Austrian Institute of Technology GmbH, Vienna, Austria
