An Empirical Framework to Control Human Attention by Robot

  • Mohammed Moshiul Hoque
  • Tomomi Onuki
  • Emi Tsuburaya
  • Yoshinori Kobayashi
  • Yoshinori Kuno
  • Takayuki Sato
  • Sachiko Kodama
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6468)

Abstract

Human attention control means shifting a person's attention from one direction to another. Gaining the person's attention and meeting his or her gaze are the two most important prerequisites for such a shift. If one person wants to communicate with another, the sender's gaze should meet the receiver's gaze so that the two make eye contact. However, it is difficult to establish eye contact non-verbally when the two people are not facing each other. The sender must therefore perform some action that captures the receiver's attention, so that the two can come face-to-face and establish eye contact. In this paper, we focus on which actions best allow a robot to attract human attention, and on how the human and the robot should display gaze behavior to each other to establish eye contact. In our system, the robot may direct its gaze in a particular direction after making eye contact; the human reads the robot's gaze and, as a result, shifts his or her attention in the direction the robot indicates. Experimental results show that the robot's head motions can attract human attention, and that the robot's blinking when their gazes meet can make the human feel that he or she has made eye contact with the robot.
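The interaction sequence the abstract outlines (attract attention with head motion, confirm mutual gaze with a blink, then redirect attention with a gaze cue) can be read as a small state machine. The Python sketch below only illustrates that reading; the phase names, perception flags, and command strings are our assumptions, not the authors' implementation.

    from enum import Enum, auto

    class Phase(Enum):
        ATTRACT = auto()      # capture attention with head motion
        EYE_CONTACT = auto()  # confirm mutual gaze with a blink
        GAZE_CUE = auto()     # redirect attention with a gaze cue
        DONE = auto()

    def attention_control_step(phase, facing_robot, gaze_on_robot, looked_at_target):
        """One update of a hypothetical attention-control loop.

        The boolean inputs stand in for perception results (e.g. head
        tracking); the returned strings stand in for robot commands.
        """
        if phase is Phase.ATTRACT:
            # Head motion attracts the target person until he or she turns.
            if facing_robot:
                return Phase.EYE_CONTACT, "hold gaze on human"
            return Phase.ATTRACT, "move head toward human"
        if phase is Phase.EYE_CONTACT:
            # Blink when the gazes meet, signalling that eye contact is made.
            if gaze_on_robot:
                return Phase.GAZE_CUE, "blink"
            return Phase.EYE_CONTACT, "hold gaze on human"
        if phase is Phase.GAZE_CUE:
            # Turn the gaze toward the target until the human follows it.
            if looked_at_target:
                return Phase.DONE, "idle"
            return Phase.GAZE_CUE, "turn gaze toward target direction"
        return Phase.DONE, "idle"

    # Example: one pass through the loop while the person still looks away.
    phase, command = attention_control_step(Phase.ATTRACT, False, False, False)
    print(phase, command)  # Phase.ATTRACT move head toward human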

Keywords

Target Person · Head Direction · Empirical Framework · Robot Action · Human Attention


Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Mohammed Moshiul Hoque (1)
  • Tomomi Onuki (1)
  • Emi Tsuburaya (1)
  • Yoshinori Kobayashi (1)
  • Yoshinori Kuno (1)
  • Takayuki Sato (2)
  • Sachiko Kodama (2)
  1. Graduate School of Science and Engineering, Saitama University, Saitama, Japan
  2. Graduate School of Informatics and Engineering, The University of Electro-Communications, Chofu, Japan
