Human Trust Factors in Image Analysis

Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 778)

Abstract

Advances in robotics and machine learning have increased the prevalence of human-machine interaction and collaboration in the workplace. Several studies have identified trust as a major factor in how efficiently human-machine interactions occur and in how errors are recognized and handled. Little work, however, has examined how this human-machine trust compares to human-human trust, or how an individual's preference for human-sourced information may interfere with their human-machine relationships, and vice versa. Outside the workplace, the media people consume has become saturated with altered and out-of-context imagery, compromising our ability to evaluate the veracity of graphical information. Our experiment seeks to identify factors of implicit bias in how humans analyze information depending on whether it comes from a machine (an algorithm) or from a human (a subject-area expert). Our results highlight the need to develop a cultural computational literacy.

Keywords

Algorithmic bias · Computational literacy · Data literacy · Human factors · Human-systems integration

Copyright information

© Springer International Publishing AG, part of Springer Nature 2019

Authors and Affiliations

  1. Computer Information Sciences and Engineering, Herbert Wertheim College of Engineering, University of Florida, Gainesville, USA