Abstract
How would human users react to social robots that possess a theory of mind (ToM)? Would robots that can infer their users’ cognitions and emotions threaten their sense of uniqueness and evoke other negative reactions because ToM is a uniquely human trait? If so, can we alleviate these negative user reactions by framing robots as members of our ingroup? We addressed these questions with a 3 (robot’s affective ToM: correct vs. incorrect vs. control) × 2 (robot’s group membership: ingroup vs. outgroup) × 2 (user gender: female vs. male) between-subjects online experiment. Participants were asked to complete an online task with a robot named Pepper that was identified as either an ingroup member or an outgroup member. They first read a passage describing a past user’s interaction with Pepper, in which the user expressed sarcasm and Pepper correctly or incorrectly identified the user’s sarcasm or made a neutral comment. Male participants reacted more negatively than female participants to the Pepper that correctly identified sarcasm, and reported lower expected enjoyment of interacting with Pepper. The ingroup Pepper made participants feel closer to the robot, but also threatened their sense of uniqueness more than the outgroup Pepper did. Design implications for fostering better human-robot interaction (HRI) are discussed.
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Kang, J., Sundar, S.S. (2020). Social Robots with a Theory of Mind (ToM): Are We Threatened When They Can Read Our Emotions? In: Novais, P., Lloret, J., Chamoso, P., Carneiro, D., Navarro, E., Omatu, S. (eds) Ambient Intelligence – Software and Applications – 10th International Symposium on Ambient Intelligence. ISAmI 2019. Advances in Intelligent Systems and Computing, vol 1006. Springer, Cham. https://doi.org/10.1007/978-3-030-24097-4_10
Print ISBN: 978-3-030-24096-7
Online ISBN: 978-3-030-24097-4
eBook Packages: Intelligent Technologies and Robotics (R0)