Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”

Chapter in Robust Intelligence and Trust in Autonomous Systems

Abstract

As robots penetrate further into everyday environments, trust in these robots becomes a crucial issue. The purpose of this work was to create and validate a reliable scale to measure changes in an individual’s trust in a robot. An assessment of current trust theory identified measurable antecedents specific to the human, the robot, and the environment. Six experiments supported the development of the 40-item trust scale. Scale development began with the creation of a 156-item pool. Two experiments identified the robot features and perceived functional characteristics related to the classification of a machine as a robot for this item pool. Item-pool reduction techniques and subject matter expert (SME) content validation were then used to reduce the scale to 42 items. The two final experiments were conducted to validate the scale. The finalized 40-item pre-post interaction trust scale was designed to measure trust perceptions specific to human-robot interaction. The scale measures trust on a 0–100 % rating scale and provides a percentage trust score. A 14-item sub-scale of this final version, recommended by SMEs, may be sufficient for some HRI tasks; the implications of this proposition are discussed.
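As an illustration of how a percentage trust score from such a scale might be computed, the sketch below averages per-item 0–100 ratings and compares pre- and post-interaction administrations. This is a minimal assumption on my part: the chapter states only that the scale "provides a percentage trust score," so the exact scoring rule (a simple mean here) and the item ratings are hypothetical.

```python
def trust_score(item_ratings):
    """Return a percentage trust score as the mean of per-item 0-100 ratings.

    Assumption: a simple mean scoring rule; the actual Trust Perception
    Scale-HRI scoring procedure may differ.
    """
    if not item_ratings:
        raise ValueError("no item ratings supplied")
    for rating in item_ratings:
        if not 0 <= rating <= 100:
            raise ValueError(f"rating {rating} outside the 0-100 range")
    return sum(item_ratings) / len(item_ratings)


# Hypothetical pre/post-interaction administrations (4-item subset for brevity;
# the full scale has 40 items, the SME-recommended sub-scale 14):
pre = trust_score([60, 70, 80, 50])    # -> 65.0
post = trust_score([80, 90, 85, 75])   # -> 82.5
change_in_trust = post - pre           # -> 17.5
```

Because the scale is administered both before and after an interaction, the difference between the two scores gives a direct measure of how the interaction changed the individual's trust in the robot.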



Acknowledgments

This research is a continuation of the author’s dissertation work supported in part by the US Army Research Laboratory (Cooperative Agreement Number W911-10-2-0016) and in part by an appointment to the US Army Research Postdoctoral Fellowship Program administered by the Oak Ridge Associated Universities through a cooperative agreement with the US Army Research Laboratory (Cooperative Agreement Number W911-NF-12-2-0019). The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the US Government. The US Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein. Special acknowledgments are due to the author’s dissertation committee: Drs. Peter A. Hancock, John D. Lee, Florian Jentsch, Peter Kincaid, Deborah R. Billings, and Lauren Reinerman. Additional acknowledgments are made to internal technical reviewers from the US Army Research Laboratory: Dr. Susan G. Hill, Dr. Don Headley, Mr. John Lockett, Dr. Kim Drnec, and Dr. Katherine Gamble.

Author information

Correspondence to Kristin E. Schaefer.

Copyright information

© 2016 Springer Science+Business Media (outside the USA)

About this chapter

Cite this chapter

Schaefer, K.E. (2016). Measuring Trust in Human Robot Interactions: Development of the “Trust Perception Scale-HRI”. In: Mittu, R., Sofge, D., Wagner, A., Lawless, W. (eds) Robust Intelligence and Trust in Autonomous Systems. Springer, Boston, MA. https://doi.org/10.1007/978-1-4899-7668-0_10

  • DOI: https://doi.org/10.1007/978-1-4899-7668-0_10

  • Publisher Name: Springer, Boston, MA

  • Print ISBN: 978-1-4899-7666-6

  • Online ISBN: 978-1-4899-7668-0

  • eBook Packages: Computer Science (R0)
