A Communication Model of Human–Robot Trust Development for Inclusive Education

  • Seungcheol Austin Lee
  • Yuhua (Jake) Liang
Chapter
Part of the Perspectives on Rethinking and Reforming Education book series (PRRE)

Abstract

Integrating robots into the educational environment offers tremendous opportunities to support and augment learning. However, building trust between human users and robots can be a challenge for inclusive education, as females, minorities, and less privileged individuals tend to report higher levels of anticipated fear and distrust toward robots. In this chapter, we examine how communication affects human–robot trust in light of the verbal messages that humans and robots exchange. The chapter presents four guiding foci of human–robot trust: (1) human–robot trust is a communication-driven process; (2) human–robot trust develops over time; (3) trust optimization requires calibration to the particular situation and circumstance; and (4) trust is based on multidimensional perceptions of the trustee’s trustworthiness. The chapter then outlines systematic research examining how trust is developed, calibrated, and affected by communication messages across three temporal stages in the inclusive learning environment: the pre-interaction stage, the entry stage, and the relationship stage.

Keywords

Human–robot trust · Communication · Partnerships · Co-roboting environment

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Chapman University, Orange, USA
