Humanoid Robots as Interviewers for Automated Credibility Assessment
Humans are poor at detecting deception even under the best conditions. There is a clear need for a decision support system that can serve as a baseline for data-driven decision making. Such a system is not subject to the often subconscious biases that can impair human judgment. One such system, designed to assist border security personnel (CBP), is the AVATAR. The AVATAR, an Embodied Conversational Agent (ECA), is implemented as a self-service kiosk. Our research uses the AVATAR as a baseline, and we plan to augment the automated credibility assessment task it performs using a humanoid robot, taking advantage of humanoid robots' capacity for realistic dialogue and nonverbal gesturing. We also capture data from sensors such as microphones, cameras, and an eye tracker, which will support model building and testing for the task of deception detection. We plan to carry out an experiment comparing the results of an interview with the AVATAR against an interview with a humanoid robot. Such a comparative analysis has never been done before, and we are eager to conduct this social experiment.
This paper presents the design and implementation plan for such an experiment and highlights the considerations involved in designing a social experiment of this kind. It will help us understand how people perceive interactions with a robot agent in contrast to the more traditional on-screen ECAs. For example, does the physical presence of a robot encourage greater perceptions of likability, expertise, or dominance? Moreover, this research will address the question of which interaction modality (ECA or robot) elicits the most diagnostic cues for detecting deception. The study may also prove useful to researchers and organizations that want to deploy robots in expanding social roles and need to understand the societal and personal implications of doing so.
Keywords: Human-Robot interaction · Credibility assessment · Social experiment with robots · AI