
Human–Robot Interaction Through Robust Gaze Following

  • Conference paper

Information Technology and Computational Physics (CITCEP 2016)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 462)

Abstract

In this paper, a probabilistic solution for gaze following in the context of joint attention is presented. Gaze following, in the sense of continuously measuring (with a greater or lesser degree of anticipation) the head pose and gaze direction of an interlocutor so as to determine his/her focus of attention, is important in several areas of computer vision, such as the development of nonintrusive gaze-tracking equipment for psychophysical experiments in neuroscience, specialized telecommunication devices, Human–Computer Interfaces (HCI), and artificial cognitive systems for Human–Robot Interaction (HRI). We have developed a probabilistic solution that inherently deals with sensor model uncertainties and incomplete data. This solution comprises a hierarchical formulation of a set of detection classifiers that loosely follows how geometrical cues provided by facial features are used by the human perceptual system for gaze estimation. A quantitative analysis of the proposed architecture's performance was undertaken through a set of experimental sessions. In these sessions, temporal sequences of moving human agents fixating a known point in space were captured by the stereo vision setup of a robotic perception system and then processed by the framework.
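The hierarchical combination of uncertain detector outputs described above can be illustrated with a minimal sketch (Python; the feature names, noise values, and the fusion rule are illustrative assumptions, not the authors' implementation): each facial-feature classifier reports a head-pose/gaze cue together with its uncertainty, and the cues are combined by inverse-variance weighting so that missing or unreliable detections degrade the estimate gracefully rather than breaking it.

    import numpy as np

    # Illustrative sketch, not the paper's implementation: fuse noisy yaw cues
    # from independent facial-feature detectors, each with its own variance.
    def fuse_gaussian_estimates(means, variances):
        """Inverse-variance (precision-weighted) fusion of independent
        Gaussian estimates of the same quantity."""
        means = np.asarray(means, dtype=float)
        variances = np.asarray(variances, dtype=float)
        weights = 1.0 / variances
        fused_mean = float(np.sum(weights * means) / np.sum(weights))
        fused_var = float(1.0 / np.sum(weights))
        return fused_mean, fused_var

    # Hypothetical yaw estimates (degrees) from eye, nose, and mouth detectors;
    # less reliable cues are given larger measurement variances.
    yaw_estimates = [12.0, 9.5, 14.0]
    yaw_variances = [4.0, 9.0, 16.0]

    yaw, yaw_var = fuse_gaussian_estimates(yaw_estimates, yaw_variances)
    print(f"fused yaw: {yaw:.1f} deg (std {yaw_var ** 0.5:.1f} deg)")

    # Incomplete data: if the mouth detector fails, simply fuse what remains.
    yaw_partial, _ = fuse_gaussian_estimates(yaw_estimates[:2], yaw_variances[:2])
    print(f"fused yaw without mouth cue: {yaw_partial:.1f} deg")

Because every cue carries an explicit variance, dropping a failed detector from the fusion is all that is needed to cope with a missed detection, which loosely mirrors the role uncertainty plays in the framework described in the abstract.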

Notes

  1. http://rovis.unitbv.ro.


Acknowledgements

We hereby acknowledge the structural funds project PRO-DD (POS-CCE, O.2.2.1., ID 123, SMIS 2637, ctr. No. 11/2009) for providing the infrastructure used in this work.

Author information

Corresponding author

Correspondence to Sorin M. Grigorescu.

Copyright information

© 2017 Springer International Publishing Switzerland

About this paper

Cite this paper

Grigorescu, S.M., Macesanu, G. (2017). Human–Robot Interaction Through Robust Gaze Following. In: Kulczycki, P., Kóczy, L., Mesiar, R., Kacprzyk, J. (eds) Information Technology and Computational Physics. CITCEP 2016. Advances in Intelligent Systems and Computing, vol 462. Springer, Cham. https://doi.org/10.1007/978-3-319-44260-0_10

  • DOI: https://doi.org/10.1007/978-3-319-44260-0_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44259-4

  • Online ISBN: 978-3-319-44260-0
