Trust in Human-Autonomy Teaming: A Review of Trust Research from the US Army Research Laboratory Robotics Collaborative Technology Alliance

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 784)

Abstract

Trust is paramount to the development of effective human-robot teaming. It becomes even more important as robotic systems evolve to make both independent and interdependent decisions in high-risk, dynamic environments. Yet despite decades of research on trust in human interpersonal teams, human-animal teams, and human-automation interaction, a number of critical research gaps remain regarding human-robot trust. The US Army Research Laboratory Robotics Collaborative Technology Alliance (RCTA) is a 10-year program in which government, industry, and academia conduct collaborative research across four major robotic technical areas: intelligence, perception, human-robot interaction, and manipulation and mobility. This paper summarizes findings from over 60 publications and 49 presentations on research conducted as part of the RCTA from 2010 to 2017 to address these critical gaps in human-robot trust.

Keywords

Human-robot interaction · Teaming · Trust

Acknowledgments

The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.


Copyright information

© Springer International Publishing AG, part of Springer Nature (outside the USA) 2019

Authors and Affiliations

  1. United States Army Research Laboratory, Aberdeen, USA
  2. University of Central Florida, Orlando, USA