
e & i Elektrotechnik und Informationstechnik, Volume 136, Issue 7, pp 307–312

Sicherheit vernetzter, hochautomatisierter Roboter

  • Willibald Krenn
Originalarbeit

Summary

The article examines current challenges to the operational safety and the cyber security of highly automated and connected robots. The focus is not limited to classical industrial robots but covers robots in general, for example (partially) automated vehicles. After a short introduction to the fundamental challenges and the state of the art, current developments in research and standardisation are presented. Particular attention is paid to the strong overlap with the topics of cybersecurity and artificial intelligence that exists especially for highly automated robots, and the question of the safety and comprehensibility of these systems (keyword: "explainable AI") is explored. The article is based, among other sources, on findings from the research projects Enable-S3 and Productive4.0 and is an extended and updated version of the talk of the same title given at the IT-Kolloquium 2019.

Keywords

Safety & security, robotics, artificial intelligence, explainable AI, cybersecurity, verification, Enable-S3, Productive4.0

Safety & security of connected and highly automated robots

Abstract

The paper presents current challenges to the safety and security of highly automated and connected robots. The term robot is used in its generic form and covers all systems from conventional industrial robots to highly automated vehicles. After a quick introduction to the basic challenges and the current state of the art, the article presents current research and standardisation activities. A special focus is given to cyber security and artificial intelligence (AI), which play important roles in connected and highly automated systems. In the case of AI, the article also looks at the topic of "explainability", which could become an important concept when using AI in safety-critical systems. The article is based on research results of the projects Enable-S3 and Productive4.0 and is an extended and updated version of a talk given at the IT-Kolloquium 2019 in Vienna.

Keywords

Safety & security, robots, artificial intelligence, explainable AI, cyber security, verification, Enable-S3, Productive4.0

Notes

Acknowledgements

Part of the work has received funding from the EU ARTEMIS/ECSEL Joint Undertaking under grant agreements n° 692455 (Enable-S3) and n° 737459 (Productive4.0), from the EC ECSEL JU together with the partners' national programmes/funding authorities (in Austria the FFG (Austrian Research Promotion Agency) on behalf of BMVIT, the Federal Ministry of Transport, Innovation and Technology) under grant agreements n° 853308 (Enable-S3) and n° 858992 (Productive4.0), and from the Horizon 2020 Programme of the EC.

Copyright information

© Springer-Verlag GmbH Austria, part of Springer Nature 2019

Authors and Affiliations

  1. AIT Austrian Institute of Technology GmbH, Vienna, Austria