A Game for Eliciting Trust Between People and Devices Under Diverse Performance Conditions

  • Ingrid Zukerman
  • Andisheh Partovi
  • Kai Zhan
  • Nora Hamacher
  • Julie Stout
  • Masud Moshtaghi
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 818)

Abstract

In this paper, we introduce a web-based game designed to investigate how different conditions affect people’s trust in devices. The game is set in a retirement village, where residents live in smart homes equipped with monitoring systems. Players, who “work” in the village, must trade off the time spent on administrative tasks (which earn them extra income) against the time spent ensuring the welfare of the residents. The game’s scenario is complex enough to support the investigation of how various factors, such as the system’s accuracy, the type of error it makes, and the risk associated with events, influence players’ trust in the monitoring system. We describe the game and its theoretical underpinnings, and present preliminary results from a trial in which players interacted with two systems with different levels of accuracy.
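
To make the trade-off concrete, the Python sketch below models a player's choice for a single alert from the monitoring system. It is purely illustrative: the function, parameter names, and numbers are our assumptions, not the game's actual payoff function, which the abstract does not specify.

    # Minimal sketch of the trust trade-off the game poses. All names and
    # numbers are illustrative assumptions, not the game's real parameters.

    def expected_cost(respond_to_alert: bool, alert_precision: float,
                      check_cost: float, miss_penalty: float) -> float:
        """Expected cost of a player's policy for a single alert.

        alert_precision -- probability that an alert corresponds to a
                           genuine event (the system's accuracy)
        check_cost      -- income foregone by interrupting administrative
                           work to check on a resident
        miss_penalty    -- cost of leaving a genuine event unattended
                           (larger for higher-risk events)
        """
        if respond_to_alert:
            # Relying on the system: always pay the time cost, never miss.
            return check_cost
        # Ignoring the system: pay the penalty whenever the alert was genuine.
        return alert_precision * miss_penalty

    # With an accurate system and a high-risk event, relying on it is cheaper;
    # with an inaccurate system and a low-risk event, ignoring alerts pays.
    print(expected_cost(True,  alert_precision=0.9, check_cost=1.0, miss_penalty=10.0))  # 1.0
    print(expected_cost(False, alert_precision=0.9, check_cost=1.0, miss_penalty=10.0))  # 9.0
    print(expected_cost(False, alert_precision=0.2, check_cost=1.0, miss_penalty=2.0))   # 0.4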

Acknowledgments

The authors thank Matt Chen for his help in recording the training video, and Stephen Meagher for his assistance with the penalty estimates.


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  • Ingrid Zukerman (1)
  • Andisheh Partovi (1)
  • Kai Zhan (1)
  • Nora Hamacher (2)
  • Julie Stout (3)
  • Masud Moshtaghi (4)

  1. Faculty of Information Technology, Monash University, Melbourne, Australia
  2. Monash Immersive Visualisation Platform, Monash University, Melbourne, Australia
  3. Faculty of Medicine, Nursing and Health Sciences, Monash University, Melbourne, Australia
  4. School of Computing and Information Systems, The University of Melbourne, Melbourne, Australia
