
Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10809)

Abstract

Trust is critical to the success of human-robot interaction. Research has shown that people trust a robot more appropriately when they have an accurate understanding of its decision-making process. The Partially Observable Markov Decision Process (POMDP) is one such decision-making framework, but its quantitative reasoning is typically opaque to people. This lack of transparency is exacerbated when a robot can learn, improving its decisions but also making them less predictable. Recent research has shown promise in calibrating human-robot trust by automatically generating explanations of POMDP-based decisions. In this work, we explore factors that can interact with such explanations in influencing human decision-making in human-robot teams. We focus on explanations with quantitative expressions of uncertainty and experiment with common design factors of a robot: its embodiment and its communication strategy when it makes an error. The results help us identify valuable properties and dynamics of the human-robot trust relationship.
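As a hedged illustration (not the authors' implementation), the sketch below shows how a robot's POMDP-style belief over hidden states might be rendered as an explanation containing a quantitative expression of uncertainty, the kind of explanation studied here. The function name, state labels, and probabilities are hypothetical.

```python
# Minimal sketch, assuming a belief state represented as a probability
# distribution over hypothesized world states. Names and numbers are
# illustrative placeholders, not taken from the paper.

def explain_decision(belief, action):
    """Render a recommendation together with an explicit confidence level."""
    most_likely = max(belief, key=belief.get)   # most probable hidden state
    confidence = belief[most_likely]            # its probability mass
    return (
        f"I recommend that you {action}. "
        f"I am {confidence:.0%} confident that {most_likely}."
    )

# Example usage with made-up numbers:
belief = {"the building is safe": 0.8, "the building is dangerous": 0.2}
print(explain_decision(belief, "enter without protective gear"))
# -> I recommend that you enter without protective gear.
#    I am 80% confident that the building is safe.
```

Such an explanation makes the robot's uncertainty explicit, which is the property the experiment manipulates alongside embodiment and error-communication strategy.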



Acknowledgment

This project is funded by the U.S. Army Research Laboratory. Statements and opinions expressed do not necessarily reflect the position or the policy of the United States Government, and no official endorsement should be inferred.

Author information


Corresponding author

Correspondence to Ning Wang.



Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Wang, N., Pynadath, D.V., Rovira, E., Barnes, M.J., Hill, S.G. (2018). Is It My Looks? Or Something I Said? The Impact of Explanations, Embodiment, and Expectations on Trust and Performance in Human-Robot Teams. In: Ham, J., Karapanos, E., Morita, P., Burns, C. (eds.) Persuasive Technology. PERSUASIVE 2018. Lecture Notes in Computer Science, vol. 10809. Springer, Cham. https://doi.org/10.1007/978-3-319-78978-1_5


  • DOI: https://doi.org/10.1007/978-3-319-78978-1_5


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78977-4

  • Online ISBN: 978-3-319-78978-1

  • eBook Packages: Computer Science, Computer Science (R0)
