An Empirical Framework to Control Human Attention by Robot
Human attention control means shifting a person's attention from one direction to another. To shift someone's attention, two prerequisites must be met: gaining the person's attention and meeting his or her gaze. If one person wishes to communicate with another, their gazes should meet so that they make eye contact. However, it is difficult to establish eye contact non-verbally when the two people are not facing each other. The sender must therefore perform some action to capture the receiver's attention so that they come face to face and can establish eye contact. In this paper, we focus on which action best allows a robot to attract human attention, and on how the human and the robot display gazing behavior toward each other to establish eye contact. In our system, the robot directs its gaze in a particular direction after making eye contact, and the human reads the robot's gaze; as a result, he or she shifts attention to the direction indicated by the robot's gaze. Experimental results show that the robot's head motions can attract human attention, and that the robot's blinking when the two gazes meet makes the human feel that eye contact with the robot has been established.
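The sequence described above (attract attention with head motion, acknowledge eye contact with a blink, then redirect the human's attention by gaze) can be sketched as a simple state machine. This is only an illustrative sketch; the phase names, inputs, and action strings are our own assumptions, not an API published in the paper.

```python
from enum import Enum, auto

class Phase(Enum):
    ATTRACT = auto()      # perform head motions to capture attention
    EYE_CONTACT = auto()  # hold gaze; blink once the gazes meet
    DIRECT = auto()       # turn gaze toward the target direction
    DONE = auto()

def control_attention_step(phase, face_detected, gaze_met):
    """One step of the hypothesized attention-control sequence.

    `face_detected` and `gaze_met` stand in for the robot's perception
    inputs. Returns (next_phase, action); all names are illustrative.
    """
    if phase is Phase.ATTRACT:
        if face_detected:
            return Phase.EYE_CONTACT, "hold gaze on person"
        return Phase.ATTRACT, "move head toward person"
    if phase is Phase.EYE_CONTACT:
        if gaze_met:
            return Phase.DIRECT, "blink to acknowledge eye contact"
        return Phase.EYE_CONTACT, "hold gaze on person"
    if phase is Phase.DIRECT:
        return Phase.DONE, "turn gaze toward target direction"
    return Phase.DONE, "idle"
```

Running the steps in order reproduces the sequence in the abstract: head motion until the person turns, a blink when the gazes meet, then a gaze shift that the human follows.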
Keywords: Target Person, Head Direction, Empirical Framework, Robot Action, Human Attention