Creativity in Measuring Trust in Human-Robot Interaction Using Interactive Dialogs

  • Halimahtun Khalid
  • Wei Shiung Liew
  • Bin Sheng Voong
  • Martin Helander
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 824)

Abstract

The measurement of human trust in humanoid robots during human-robot interaction requires novel approaches that can predict trust effectively. We present a method that maps subjective measures (i.e., general trust, psychological) to objective measures (i.e., physiological) to predict trust. We designed interactive dialogs representing real-world service scenarios in Business, Disaster, and Healthcare, embedding fifteen trust attributes of Ability, Benevolence, and Integrity (ABI) in the communication dialogs. The ABI measures were mapped to physiological measures of facial expressions, voiced speech, and camera-based heart rate. Forty-eight subjects, 24 males and 24 females aged between 18 and 36 years, participated in the experiment; half were Malay and half were Chinese. Three humanoid robots represented full-bodied, partial-bodied, and virtual agents. A within-subjects design was used: each subject was tested with all robots in all scenarios. Subjects scored trust on an online scale ranging from 0 to 7 points. The subjective data were analyzed using univariate and one-way MANOVA. The results showed the humanoids to be trustworthy in different service tasks, with the ‘Integrity’ and ‘Ability’ trust components important in the Business and Disaster scenarios. Trust estimation with this approach was about 83% accurate. In conclusion, humanoid robots can interact with humans using dialogs that are representative of real-world communication.
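
To make the subjective-to-objective mapping concrete, the sketch below shows one plausible way to train a trust predictor from physiological features. This is a minimal sketch, not the authors' implementation: the feature layout, the random-forest classifier, the synthetic data, and the midpoint threshold on the trust scale are assumptions made purely for illustration; only the 48 × 3 × 3 trial structure, the 0–7 rating scale, and the reported ~83% accuracy come from the study.

```python
# Minimal sketch (not the authors' code): mapping physiological features
# to subjective trust ratings with a generic classifier. All data here is
# synthetic; in the study, features came from facial expressions, voiced
# speech, and camera-based heart rate.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# 48 subjects x 3 robots x 3 scenarios = 432 trials; 20 features is an
# arbitrary placeholder for the extracted physiological measures.
n_trials, n_features = 432, 20
X = rng.normal(size=(n_trials, n_features))

# Hypothetical binary label: trust rating above or below the midpoint of
# the 0-7 online scale the subjects used.
ratings = rng.uniform(0.0, 7.0, size=n_trials)
y = (ratings > 3.5).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean 5-fold CV accuracy: {scores.mean():.2f}")
```

On real extracted features rather than random noise, a pipeline of this shape is the kind of model that could produce the roughly 83% estimation accuracy reported in the abstract.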

Keywords

Trust · Dialog design · Human-Robot interaction

Notes

Acknowledgement

We gratefully acknowledge the financial support of the US Air Force Office of Scientific Research (AFOSR), Washington, D.C., and the Asian Office of Aerospace Research and Development (AOARD), Japan, under Grant No. FA2386-14-1-0016.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Halimahtun Khalid (1)
  • Wei Shiung Liew (2)
  • Bin Sheng Voong (1)
  • Martin Helander (1)
  1. Damai Sciences, Kuala Lumpur, Malaysia
  2. University of Malaya, Kuala Lumpur, Malaysia
