Abstract
Human-Robot Collaborative (HRC) applications pose new challenges for safety assessment, due to the close interaction between robots and human operators. A human-in-the-loop perspective must therefore be taken, at both the design and the operation level, when assessing the safety of these applications. In this paper we present an extension of a tool-supported methodology, called SAFER-HRC, compatible with the current ISO 10218-2 standard, which: (i) takes into account the possible behaviors of human operators, such as mistakes and misuses while working with the robot (operational level); and (ii) exploits the expertise of safety engineers to incrementally update and adjust the model of the system (design level). The methodology is supported by a tool that allows designers to formally verify the modeled HRC applications in an iterative manner, in search of safety violations.
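The iterative workflow the abstract describes can be pictured as a loop in which a checker searches the model for a safety violation and the safety engineer responds by adding a risk-reduction measure. The sketch below is purely illustrative (it is not the SAFER-HRC tool, and the model, hazard names, and `check` function are assumptions standing in for a real bounded satisfiability check):

```python
# Illustrative sketch of the iterative verify-and-refine loop: a stand-in
# checker reports hazards not yet covered by a safety constraint, and each
# report is answered by adding a constraint until no violation remains.

def check(model, constraints):
    """Stand-in for a bounded satisfiability check: return a hazard
    (a counterexample, in the real tool) not yet mitigated, or None."""
    for hazard in model["hazards"]:
        if hazard not in constraints:
            return hazard
    return None  # no safety violation found within the bound

def iterative_safety_assessment(model):
    constraints = set()
    while (hazard := check(model, constraints)) is not None:
        # In practice the safety engineer inspects the counterexample and
        # adjusts the model; here we simply record a covering constraint.
        constraints.add(hazard)
    return constraints

model = {"hazards": ["operator_reaches_into_workspace", "robot_moves_at_rest"]}
print(sorted(iterative_safety_assessment(model)))
```

The point of the sketch is the shape of the loop, not the checking itself: each counterexample drives one model update, and the process terminates when the checker finds no further violation.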
Notes
- 1.
A safety constraint is achieved by a safety function in charge of reliably accomplishing the risk reduction objective. The required reliability level is determined through the analyses and methods of functional safety, as defined, for instance, by ISO 13849. In this case the safety function monitors the position of the motors so as to prevent unwanted motion away from the desired resting position.
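The safety function described in this note can be sketched as a simple position monitor: compare each motor's measured position against the commanded resting position and trip a protective stop on deviation. This is a minimal sketch under assumed values; the tolerance, joint representation, and function name are illustrative, not part of the paper's model:

```python
# Hedged sketch of the note's safety function: detect unwanted motion
# away from the resting position. The per-joint tolerance is an assumed
# value; a real system would derive it from the ISO 13849 analysis.

REST_TOLERANCE_RAD = 0.005  # assumed per-joint tolerance (radians)

def position_monitor(rest_position, measured_position):
    """Return True if any joint deviates from its resting position beyond
    tolerance, signalling that a protective stop must be triggered."""
    return any(abs(m - r) > REST_TOLERANCE_RAD
               for m, r in zip(measured_position, rest_position))

rest = [0.0, 1.57, -0.78]
print(position_monitor(rest, [0.0, 1.57, -0.78]))  # robot at rest
print(position_monitor(rest, [0.0, 1.60, -0.78]))  # unwanted motion
```

In a real implementation this check would run in a certified safety controller at a fixed cycle time, with the trip action wired to a safety-rated stop rather than a boolean return value.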
References
Zot: a bounded satisfiability checker. https://github.com/fm-polimi/zot
Askarpour, M., Mandrioli, D., Rossi, M., Vicentini, F.: SAFER-HRC: safety analysis through formal vERification in human-robot collaboration. In: Skavhaug, A., Guiochet, J., Bitsch, F. (eds.) SAFECOMP 2016. LNCS, vol. 9922, pp. 283–295. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-45477-1_22
Baresi, L., Pourhashem Kallehbasti, M.M., Rossi, M.: Efficient scalable verification of LTL specifications. In: Proceedings of Software Engineering (2015)
Bouti, A., Kadi, D.A.: A state-of-the-art review of FMEA/FMECA. Int. J. Reliab. Qual. Saf. Eng. 1, 515 (1994)
Bredereke, J., Lankenau, A.: Safety-relevant mode confusions modelling and reducing them. Reliab. Eng. Syst. Saf. 88(3), 229–245 (2005)
Butterworth, R., Blandford, A., Duke, D.J.: Demonstrating the cognitive plausibility of interactive system specifications. Formal Asp. Comput. 12, 237–259 (2000)
Dhillon, B.S., Fashandi, A.R.M.: Safety and reliability assessment techniques in robotics. Robotica 15, 701–708 (1997)
Dixon, C., Webster, M., Saunders, J., Fisher, M., Dautenhahn, K.: “The Fridge Door is Open”–temporal verification of a robotic assistant’s behaviours. In: Mistry, M., Leonardis, A., Witkowski, M., Melhuish, C. (eds.) TAROS 2014. LNCS (LNAI), vol. 8717, pp. 97–108. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10401-0_9
Fu, J., Topcu, U.: Synthesis of shared autonomy policies with temporal logic specifications. IEEE Trans. Autom. Sci. Eng. 13(1), 7–17 (2016)
Furia, C.A., Mandrioli, D., Morzenti, A., Rossi, M.: Modeling Time in Computing. Monographs in Theoretical Computer Science. An EATCS Series. Springer, Heidelberg (2012)
Guiochet, J.: Hazard analysis of human-robot interactions with HAZOP-UML. Saf. Sci. 225–237 (2016). arXiv:1602.03139
Guiochet, J., Do Hoang, Q.A., Kaaniche, M., Powell, D.: Model-based safety analysis of human-robot interactions: the MIRAS walking assistance robot. In: Proceedings of ICORR (2013)
International Electrotechnical Commission: IEC 61882, Hazard and operability studies (HAZOP studies) - Application guide (2001)
International Organization for Standardization: ISO 10218-2:2011, Robots and robotic devices - Safety requirements for industrial robots - Part 2: Robot systems and integration (2011)
International Organization for Standardization: ISO 14121-2:2007, Safety of machinery - Risk assessment - Part 2: Practical guidance and examples of methods (2007)
International Organization for Standardization: ISO/TS 15066:2016, Robots and robotic devices - Collaborative robots (2016)
Leveson, N.: Engineering a Safer World: Systems Thinking Applied to Safety. MIT Press, Cambridge (2011)
Machin, M., Dufossé, F., Blanquart, J.-P., Guiochet, J., Powell, D., Waeselynck, H.: Specifying safety monitors for autonomous systems using model-checking. In: Bondavalli, A., Di Giandomenico, F. (eds.) SAFECOMP 2014. LNCS, vol. 8666, pp. 262–277. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-10506-2_18
Machin, M., Dufossé, F., Guiochet, J., Powell, D., Roy, M., Waeselynck, H.: Model-checking and game theory for synthesis of safety rules. In: Proceedings of HASE (2015)
Martin-Guillerez, D., Guiochet, J., Powell, D., Zanon, C.: A UML-based method for risk analysis of human-robot interactions. In: Proceedings of SERENE. ACM (2010)
Pouliezos, A., Stavrakakis, G.S.: Fast fault diagnosis for industrial processes applied to the reliable operation of robotic systems. Int. J. Syst. Sci. 20, 1233–1257 (1989)
Salem, M., Lakatos, G., Amirabdollahian, F., Dautenhahn, K.: Would you trust a (faulty) robot?: effects of error, task type and personality on human-robot cooperation and trust. In: Proceedings of ACM/IEEE Human-Robot Interaction, HRI (2015)
Sharma, T.C., Bazovsky, I.: Reliability analysis of large system by Markov techniques. In: Proceedings of the Symposium on Reliability and Maintainability (1993)
Sierhuis, M., Clancey, W.J., Hoof, R.J.V.: Brahms: a multi-agent modelling environment for simulating work processes and practices. Int. J. Simul. Process Model. 3, 134–152 (2007)
Stocker, R., Dennis, L., Dixon, C., Fisher, M.: Verifying brahms human-robot teamwork models. In: del Cerro, L.F., Herzig, A., Mengin, J. (eds.) JELIA 2012. LNCS (LNAI), vol. 7519, pp. 385–397. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-33353-8_30
Webster, M., Dixon, C., Fisher, M., Salem, M., Saunders, J., Koay, K., Dautenhahn, K.: Formal verification of an autonomous personal robotic assistant. In: Formal Verification and Modeling in Human-Machine Systems (2014)
Webster, M., Dixon, C., Fisher, M., Salem, M., Saunders, J., Koay, K.L., Dautenhahn, K., Saez-Pons, J.: Toward reliable autonomous robotic assistants through formal verification: a case study. IEEE Trans. Hum. Mach. Syst. 46, 186–196 (2016)
© 2018 Springer International Publishing AG
Cite this paper
Askarpour, M., Mandrioli, D., Rossi, M., Vicentini, F. (2018). A Human-in-the-Loop Perspective for Safety Assessment in Robotic Applications. In: Petrenko, A., Voronkov, A. (eds) Perspectives of System Informatics. PSI 2017. Lecture Notes in Computer Science(), vol 10742. Springer, Cham. https://doi.org/10.1007/978-3-319-74313-4_2
Print ISBN: 978-3-319-74312-7
Online ISBN: 978-3-319-74313-4