
A Human-in-the-Loop Perspective for Safety Assessment in Robotic Applications

  • Conference paper in Perspectives of System Informatics (PSI 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10742)

Abstract

Human-Robot Collaborative (HRC) applications pose new challenges in the assessment of their safety, due to the close interaction between robots and human operators. This entails that a human-in-the-loop perspective must be taken, at both the design and the operation levels, when assessing the safety of these applications. In this paper we present an extension of a tool-supported methodology compatible with the current ISO 10218-2 standard, called SAFER-HRC, which (i) takes into account the possible behaviors of human operators, such as mistakes and misuses while working with the robot (operational level), and (ii) exploits the expertise of safety engineers to incrementally update and adjust the model of the system (design level). The methodology is supported by a tool that allows designers to formally verify the modeled HRC applications in search of safety violations, in an iterative manner.
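The iterative workflow described above can be pictured with a minimal, purely illustrative sketch. The toy state space, hazard predicate and refinement constraint below are invented for illustration; in the actual methodology the task, the operator behaviors and the safety constraints are temporal-logic formulas discharged to the Zot bounded satisfiability checker (reference 1), but the analyse-refine-recheck loop has the same shape.

```python
from itertools import product

# Toy abstraction of an HRC task: robot motion state x operator position.
ROBOT = ["moving", "stopped"]
OPERATOR = ["far", "near"]

def is_hazard(state):
    # Hazardous configuration: the robot moves while the operator is near.
    return state == ("moving", "near")

def reachable(state, constraints):
    # A state ruled out by an enforced safety measure is no longer reachable.
    return all(c(state) for c in constraints)

def bounded_check(constraints):
    """Return the first reachable hazardous state, or None.
    Stand-in for the bounded check SAFER-HRC delegates to Zot."""
    for state in product(ROBOT, OPERATOR):
        if reachable(state, constraints) and is_hazard(state):
            return state
    return None

constraints = []                      # initial model: no risk-reduction measures
while (violation := bounded_check(constraints)) is not None:
    print("safety violation found:", violation)
    # Safety engineer's refinement: forbid motion while the operator is near
    # (e.g. a protective stop), then re-run the check on the updated model.
    constraints.append(lambda s: not (s[0] == "moving" and s[1] == "near"))
print("no safety violation found; model accepted")
```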


Notes

  1. A safety constraint is achieved by a safety function in charge of reliably accomplishing the risk-reduction objective. The reliability level is defined according to the analyses and methods of functional safety, for instance as in ISO 13849. In this case, the safety function monitors the position of the motors so as to prevent unwanted motion away from the desired resting position.
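As a purely illustrative example of such a safety function, the following sketch checks measured motor positions against the desired resting position and requests a protective stop when they drift beyond a tolerance. The joint setpoint, tolerance value and stop callback are hypothetical placeholders, not taken from the paper.

```python
# Illustrative sketch of the safety function described in the note above:
# monitor motor (joint) positions while the robot should be at rest and
# trigger a protective stop on unwanted motion.

RESTING_POSITION = (0.0, -1.57, 1.57, 0.0, 0.0, 0.0)  # assumed joint setpoint [rad]
TOLERANCE_RAD = 0.01                                   # assumed allowed drift [rad]

def unwanted_motion(measured):
    """True if any joint has drifted from the resting position beyond tolerance."""
    return any(abs(m - r) > TOLERANCE_RAD for m, r in zip(measured, RESTING_POSITION))

def monitor_step(measured, trigger_protective_stop):
    """One monitoring cycle, called periodically with fresh encoder readings."""
    if unwanted_motion(measured):
        trigger_protective_stop()

# Example: the third joint has drifted by 0.03 rad, so the stop is requested.
monitor_step((0.0, -1.57, 1.60, 0.0, 0.0, 0.0),
             lambda: print("protective stop triggered"))
```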

References

  1. Zot: a bounded satisfiability checker. github.com/fm-polimi/zot

  2. Askarpour, M., Mandrioli, D., Rossi, M., Vicentini, F.: SAFER-HRC: safety analysis through formal vERification in human-robot collaboration. In: Skavhaug, A., Guiochet, J., Bitsch, F. (eds.) SAFECOMP 2016. LNCS, vol. 9922, pp. 283–295. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45477-1_22


  3. Baresi, L., Pourhashem Kallehbasti, M.M., Rossi, M.: Efficient scalable verification of LTL specifications. In: Proceedings of Software Engineering (2015)


  4. Bouti, A., Kadi, D.A.: A state-of-the-art review of FMEA/FMECA. Int. J. Reliab. Qual. Saf. Eng. 1, 515 (1994)


  5. Bredereke, J., Lankenau, A.: Safety-relevant mode confusions modelling and reducing them. Reliab. Eng. Syst. Saf. 88(3), 229–245 (2005)


  6. Butterworth, R., Blandford, A., Duke, D.J.: Demonstrating the cognitive plausibility of interactive system specifications. Formal Asp. Comput. 12, 237–259 (2000)


  7. Dhillon, B.S., Fashandi, A.R.M.: Safety and reliability assessment techniques in robotics. Robotica 15, 701–708 (1997)


  8. Dixon, C., Webster, M., Saunders, J., Fisher, M., Dautenhahn, K.: “The Fridge Door is Open”–temporal verification of a robotic assistant’s behaviours. In: Mistry, M., Leonardis, A., Witkowski, M., Melhuish, C. (eds.) TAROS 2014. LNCS (LNAI), vol. 8717, pp. 97–108. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10401-0_9


  9. Fu, J., Topcu, U.: Synthesis of shared autonomy policies with temporal logic specifications. IEEE Trans. Autom. Sci. Eng. 13(1), 7–17 (2016)


  10. Furia, C.A., Mandrioli, D., Morzenti, A., Rossi, M.: Modeling Time in Computing. Monographs in Theoretical Computer Science. An EATCS Series. Springer, Heidelberg (2012)


  11. Guiochet, J.: Hazard analysis of human-robot interactions with HAZOP-UML. Saf. Sci. 225–237 (2016). arXiv:1602.03139

  12. Guiochet, J., Do Hoang, Q.A., Kaaniche, M., Powell, D.: Model-based safety analysis of human-robot interactions: the MIRAS walking assistance robot. In: Proceedings of ICORR (2013)


  13. International Electrotechnical Commission: IEC 61882, Hazard and operability studies (HAZOP studies) - Application guide (2001)


  14. International Organization for Standardization: ISO 10218-2:2011, Robots and robotic devices - Safety requirements for industrial robots - Part 2: Robot systems and integration


  15. International Organization for Standardization: ISO 14121-2:2007, Safety of machinery - Risk assessment - Part 2


  16. International Organization for Standardization: ISO/TS 15066:2016, Robots and robotic devices - Collaborative robots


  17. Leveson, N.: Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press, Cambridge (2011)


  18. Machin, M., Dufossé, F., Blanquart, J.-P., Guiochet, J., Powell, D., Waeselynck, H.: Specifying safety monitors for autonomous systems using model-checking. In: Bondavalli, A., Di Giandomenico, F. (eds.) SAFECOMP 2014. LNCS, vol. 8666, pp. 262–277. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10506-2_18


  19. Machin, M., Dufossé, F., Guiochet, J., Powell, D., Roy, M., Waeselynck, H.: Model-checking and game theory for synthesis of safety rules. In: Proceedings of HASE (2015)


  20. Martin-Guillerez, D., Guiochet, J., Powell, D., Zanon, C.: A UML-based method for risk analysis of human-robot interactions. In: Proceedings of SERENE. ACM (2010)


  21. Pouliezos, A., Stavrakakis, G.S.: Fast fault diagnosis for industrial processes applied to the reliable operation of robotic systems. Int. J. Syst. Sci. 20, 1233–1257 (1989)


  22. Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K.: Would you trust a (faulty) robot?: effects of error, task type and personality on human-robot cooperation and trust. In: Proceedings of ACM/IEEE Human-Robot Interaction, HRI (2015)


  23. Sharma, T.C., Bazovsky, I.: Reliability analysis of large system by Markov techniques. In: Proceedings of the Symposium on Reliability and Maintainability (1993)


  24. Sierhuis, M., Clancey, W.J., Hoof, R.J.V.: Brahms: a multi-agent modelling environment for simulating work processes and practices. Int. J. Simul. Process Model. 3, 134–152 (2007)


  25. Stocker, R., Dennis, L., Dixon, C., Fisher, M.: Verifying Brahms human-robot teamwork models. In: del Cerro, L.F., Herzig, A., Mengin, J. (eds.) JELIA 2012. LNCS (LNAI), vol. 7519, pp. 385–397. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33353-8_30


  26. Webster, M., Dixon, C., Fisher, M., Salem, M., Saunders, J., Koay, K., Dautenhahn, K.: Formal verification of an autonomous personal robotic assistant. In: Formal Verification and Modeling in Human-Machine Systems (2014)


  27. Webster, M., Dixon, C., Fisher, M., Salem, M., Saunders, J., Koay, K.L., Dautenhahn, K., Saez-Pons, J.: Toward reliable autonomous robotic assistants through formal verification: a case study. IEEE Trans. Hum. Mach. Syst. 46, 186–196 (2016)



Author information

Correspondence to Mehrnoosh Askarpour.


Copyright information

© 2018 Springer International Publishing AG

About this paper


Cite this paper

Askarpour, M., Mandrioli, D., Rossi, M., Vicentini, F. (2018). A Human-in-the-Loop Perspective for Safety Assessment in Robotic Applications. In: Petrenko, A., Voronkov, A. (eds) Perspectives of System Informatics. PSI 2017. Lecture Notes in Computer Science, vol. 10742. Springer, Cham. https://doi.org/10.1007/978-3-319-74313-4_2


  • DOI: https://doi.org/10.1007/978-3-319-74313-4_2


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-74312-7

  • Online ISBN: 978-3-319-74313-4

