Safe and Trustworthy Human-Robot Interaction

  • Dejanira Araiza-Illan
  • Kerstin Eder
Reference work entry

Abstract

To be genuinely useful, robotic assistants must be both smart and powerful, which makes them potentially dangerous. Safety and trustworthiness must therefore be primary design goals for human-assistive robots. This chapter focuses on techniques for gaining confidence in the safety of the code that controls robots interacting directly with humans. We cover formal methods and simulation-based testing, as well as experimental evaluation of how much users actually trust robots when interacting with them in a practical setting. Verifying and validating robot behavior in human-robot interactions is complex enough to demand a combination of different techniques, and we discuss the benefits of combining them. As robots are equipped with increasingly sophisticated reasoning capabilities to operate fully autonomously in open environments, it becomes ever more important to develop verification methods that match that level of artificial intelligence.
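To make the simulation-based testing idea concrete, the following is a minimal Python sketch, not the chapter's actual framework: an assertion monitor checks a safety property of a robot-to-human object handover over pseudo-randomly generated simulation traces. The scenario, the State fields, and the trace generator are all illustrative assumptions introduced here for exposition.

```python
# Hypothetical sketch of simulation-based assertion checking for a
# robot-to-human handover. All names and thresholds are illustrative.
import random
from dataclasses import dataclass


@dataclass
class State:
    time: float          # simulation time in seconds
    gaze_ok: bool        # human sensed as looking at the robot
    pressure_ok: bool    # human sensed as gripping the object
    gripper_open: bool   # robot has released the object


def handover_is_safe(trace):
    """Assertion: the robot releases the object only while the human's
    gaze and grip pressure are both sensed as OK."""
    for s in trace:
        if s.gripper_open and not (s.gaze_ok and s.pressure_ok):
            return False, s.time
    return True, None


def random_trace(n=50, seed=None):
    """Crude stand-in for a robot simulator: produces a random sensor
    trace from a deliberately buggy controller that may release early."""
    rng = random.Random(seed)
    trace, released = [], False
    for i in range(n):
        gaze, pressure = rng.random() < 0.8, rng.random() < 0.8
        if not released and rng.random() < 0.1:
            released = True  # controller bug: release regardless of sensing
        trace.append(State(0.1 * i, gaze, pressure, released))
    return trace


if __name__ == "__main__":
    for seed in range(20):  # 20 pseudo-random test runs
        ok, t = handover_is_safe(random_trace(seed=seed))
        if not ok:
            print(f"seed {seed}: unsafe release at t={t:.1f}s")
```

In a realistic setting the trace generator would be replaced by a physics-based simulation of the robot code under test, and pseudo-random stimulus would be complemented by constrained and model-based test generation to reach coverage targets.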


Copyright information

© Springer Nature B.V. 2019

Authors and Affiliations

Department of Computer Science, University of Bristol, Bristol, UK
