
Humanoid Robots as Interviewers for Automated Credibility Assessment

  • Aaron C. Elkins
  • Amit Gupte
  • Lance Cameron
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11589)

Abstract

Humans are poor at detecting deception even under the best conditions, and their judgments are impaired by often subconscious biases. This motivates a decision support system that can serve as an unbiased baseline for data-driven decision making. One such system, deployed to assist border security officers (U.S. Customs and Border Protection, CBP), is the AVATAR: an Embodied Conversational Agent (ECA) implemented as a self-service interviewing kiosk. Our research uses the AVATAR as the baseline and augments the automated credibility assessment task it performs with a humanoid robot, taking advantage of humanoid robots' capacity for realistic dialogue and nonverbal gesturing. We also capture data from sensors such as microphones, cameras, and an eye tracker to support model building and testing for the deception detection task. We plan an experiment comparing the results of an interview with the AVATAR against an interview with a humanoid robot; to our knowledge, such a comparative analysis has not been done before, and we are eager to conduct this social experiment.
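
As a rough illustration of the model-building step described above (not the AVATAR's actual pipeline), the Python sketch below fuses hypothetical per-interview feature blocks from the microphone, camera, and eye tracker into a single vector and scores a baseline classifier. All feature names, counts, and data are assumptions made for the sketch.

```python
# Minimal illustrative sketch (assumed, not the authors' pipeline): feature-level
# fusion of multi-sensor interview data followed by a baseline deception classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_interviews = 200

# Hypothetical per-interview feature blocks extracted offline from each sensor.
vocal_features = rng.normal(size=(n_interviews, 8))    # e.g., pitch, vocal quality
facial_features = rng.normal(size=(n_interviews, 12))  # e.g., expression intensities
gaze_features = rng.normal(size=(n_interviews, 4))     # e.g., fixations, pupil dilation
labels = rng.integers(0, 2, size=n_interviews)         # 1 = deceptive, 0 = truthful

# Simple fusion: concatenate the sensor blocks into one feature vector per interview.
X = np.hstack([vocal_features, facial_features, gaze_features])

# Cross-validated accuracy of a simple baseline model on the fused features.
model = LogisticRegression(max_iter=1000)
scores = cross_val_score(model, X, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```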

This paper presents the design and implementation plan for such an experiment and highlights the considerations involved in designing this kind of social experiment. The study will help us understand how people perceive interactions with a robot agent in contrast to a more traditional on-screen ECA. For example, does the physical presence of a robot encourage greater perceptions of likability, expertise, or dominance? Moreover, this research will address which interaction model (ECA or robot) elicits the most diagnostic cues for detecting deception. The study may also prove useful to researchers and organizations that want to deploy robots in increasingly social roles and need to understand the societal and personal implications of doing so.
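
As one hedged sketch of the planned comparative analysis (the paper does not specify the measures or statistical design), the example below compares hypothetical post-interview likability ratings between an ECA condition and a robot condition using Welch's t-test. The sample sizes, rating scale, and values are assumptions.

```python
# Illustrative comparison (assumed analysis): perception ratings from two
# independent participant groups, one interviewed by the on-screen ECA and
# one by the humanoid robot, compared with a two-sample (Welch's) t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 1-7 likability ratings from the two conditions.
eca_likability = rng.normal(loc=4.8, scale=1.0, size=40)
robot_likability = rng.normal(loc=5.3, scale=1.0, size=40)

t_stat, p_value = stats.ttest_ind(robot_likability, eca_likability, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.3f}")
```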

Keywords

Human-robot interaction · Credibility assessment · Social experiment with robots · AI


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. San Diego State University Artificial Intelligence Lab, San Diego State University, San Diego, USA
