
Anthropomorphism: An Investigation of Its Effect on Trust in Human-Machine Interfaces for Highly Automated Vehicles

  • Erik Aremyr
  • Martin Jönsson
  • Helena Strömberg
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 823)

Abstract

Trust has been identified as a major factor in relation to user acceptance of Highly Automated Vehicles (HAVs). A positive correlation has been suggested between increased trust and the use of anthropomorphic features in interfaces. However, more research is necessary to establish whether this holds in an HAV context. Thus, the aim of this study was to investigate how trust in HAVs is influenced by HMI design with different degrees of anthropomorphism: baseline, caricature, and human. Ten subjects participated in an in-vehicle trial to test the designs. The results showed no significant difference in levels of trust between conditions. Instead, it was found that anthropomorphism may affect user acceptance indirectly through its effect on perceived ease of use and usefulness. The findings imply that designers must be cautious when using anthropomorphism and consider adaptability and customisability to incorporate new and diverse user needs associated with the use of HAVs.

Keywords

Highly Automated Vehicles · Human-Machine Interaction · Anthropomorphism

References

  1. Ghazizadeh M, Lee JD, Boyle LN (2012) Extending the technology acceptance model to assess automation. Cogn Technol Work 14(1):39–49
  2. Choi JK, Ji YG (2015) Investigating the importance of trust on adopting an autonomous vehicle. Int J Hum-Comput Int 31(10):692–702
  3. Schoettle B, Sivak M (2014) A survey of public opinion about autonomous and self-driving vehicles in the US, the UK, and Australia. Report. https://deepblue.lib.umich.edu/handle/2027.42/108384
  4. Green BD (2010) Applying human characteristics of trust to animated anthropomorphic software agents. State University of New York at Buffalo
  5. Pak R, Fink N, Price M, Bass B, Sturre L (2012) Decision support aids with anthropomorphic characteristics influence trust and performance in younger and older adults. Ergonomics 55(9):1059–1072
  6. de Visser EJ, Krueger F, McKnight P, Scheid A, Smith M, Chalk S, Parasuraman R (2012) The world is not enough: trust in cognitive agents. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol 56, no 1, pp 263–267
  7. Hoff KA, Bashir M (2015) Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 57(3):407–434
  8. Waytz A, Heafner J, Epley N (2014) The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J Exp Soc Psychol 52(5):113–117
  9. Kraus JM, Nothdurft F, Hock P, Scholz D, Minker W, Baumann M (2016) Human after all: effects of mere presence and social interaction of a humanoid robot as a co-driver in automated driving. In: Adjunct Proceedings of AutomotiveUI 2016. https://doi.org/10.1145/3004323.3004338
  10. Lee J-G, Kim KJ, Lee S, Shin D-H (2015) Can autonomous vehicles be safe and trustworthy? Effects of appearance and autonomy of unmanned driving systems. Int J Hum-Comput Int 31(10):682–691
  11. Häuslschmid R, von Bülow M, Pfleging B, Butz A (2017) Supporting trust in autonomous driving. In: Proceedings of the 22nd International Conference on Intelligent User Interfaces (IUI 2017). https://doi.org/10.1145/3025171.3025198
  12. Zhang T, Zhu B, Lee L, Kaber D (2008) Service robot anthropomorphism and interface design for emotion in human-robot interaction. Paper presented at the 2008 IEEE International Conference on Automation Science and Engineering, 23–26 Aug 2008
  13. Ekman F, Johansson M (2015) Creating appropriate trust for autonomous vehicles. MSc thesis, Chalmers University of Technology
  14. Helldin T, Falkman G, Riveiro M, Davidsson S (2013) Presenting system uncertainty in automotive UIs for supporting trust calibration in autonomous driving. In: Proceedings of AutoUI 2013. http://dx.doi.org/10.1145/2516540.251655
  15. Brooke J (1996) SUS - a quick and dirty usability scale. Usability Eval Ind 189(194):4–7
  16. Bartneck C, Kulić D, Croft E, Zoghbi S (2009) Measurement instruments for the anthropomorphism, animacy, likeability, perceived intelligence, and perceived safety of robots. Int J Soc Robot 1(1):71–81
  17. Lowry R (1998) Concepts and applications of inferential statistics, 1st edn, subchapter 12a: the Wilcoxon signed-rank test
  18. Deniaud C, Honnet V, Jeanne B, Mestre D (2015) The concept of "presence" as a measure of ecological validity in driving simulators. J Interact Sci 3:1. https://doi.org/10.1186/s40166-015-0005-z

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Chalmers University of Technology, Gothenburg, Sweden
