
Gaze Tracing in a Bounded Log-Spherical Space for Artificial Attention Systems

  • Beatriz Oliveira
  • Pablo Lanillos
  • João Filipe Ferreira
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 418)

Abstract

Human gaze is one of the most important cues for social robotics because of the intention information it embeds. Discovering the location or the object that an interlocutor is gazing at gives the machine insight for performing the correct attentional behaviour. This work presents a fast voxel traversal algorithm for estimating the potential locations that a human is gazing at. Given a 3D occupancy map in log-spherical coordinates and the gaze vector, we evaluate the regions that are relevant for attention by computing the set of voxels intersected by an arbitrary gaze ray in 3D space within a bounded log-spherical section defined by \(\rho \in (\rho_{min},\rho_{max}),\ \theta \in (\theta_{min},\theta_{max}),\ \phi \in (\phi_{min},\phi_{max})\). The first intersected voxel is computed in closed form, and the rest are obtained by binary search, guaranteeing no repetitions in the intersected set. The proposed method is motivated and validated within a human-robot interaction application: gaze tracing for artificial attention systems.

Keywords

Human-Robot Interaction (HRI) · Artificial attention · Gaze tracing · Log-spherical · Voxel traversal algorithm
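The traversal described in the abstract can be illustrated with a minimal sketch. The grid bounds, resolutions, and function names below are assumptions for illustration, and the ray is traced by dense sampling with consecutive deduplication, a naive stand-in for the paper's closed-form entry point and binary-search traversal; it shows only how a Cartesian point on the gaze ray maps to a voxel of a bounded log-spherical grid (uniform bins in \(\log\rho\), \(\theta\), \(\phi\)).

```python
import math

# Hypothetical bounded log-spherical section: rho in (RHO_MIN, RHO_MAX),
# theta (polar) in (0, pi), phi (azimuth) in (-pi, pi).  The radial axis
# is subdivided uniformly in log(rho); the angular axes uniformly.
RHO_MIN, RHO_MAX = 0.1, 10.0
N_RHO, N_THETA, N_PHI = 32, 16, 32

def voxel_index(p):
    """Map a Cartesian point to its (i_rho, i_theta, i_phi) voxel,
    or None if the point lies outside the bounded section."""
    x, y, z = p
    rho = math.sqrt(x * x + y * y + z * z)
    if not (RHO_MIN < rho < RHO_MAX):
        return None
    theta = math.acos(z / rho)   # polar angle in [0, pi]
    phi = math.atan2(y, x)       # azimuth in (-pi, pi]
    # log-scale radial bin, uniform angular bins
    i_rho = int(N_RHO * (math.log(rho) - math.log(RHO_MIN))
                / (math.log(RHO_MAX) - math.log(RHO_MIN)))
    i_theta = int(N_THETA * theta / math.pi)
    i_phi = int(N_PHI * (phi + math.pi) / (2 * math.pi))
    return (min(i_rho, N_RHO - 1), min(i_theta, N_THETA - 1),
            min(i_phi, N_PHI - 1))

def trace_gaze(origin, direction, step=1e-3, t_max=20.0):
    """Collect the ordered voxels pierced by the ray origin + t*direction
    by dense sampling (naive; the paper instead computes the first voxel
    in closed form and advances by binary search)."""
    norm = math.sqrt(sum(c * c for c in direction))
    d = [c / norm for c in direction]
    visited, t = [], 0.0
    while t < t_max:
        p = [origin[k] + t * d[k] for k in range(3)]
        v = voxel_index(p)
        if v is not None and (not visited or visited[-1] != v):
            visited.append(v)
        t += step
    return visited

# Example: a gaze ray from the sensor origin, mostly along +x
voxels = trace_gaze(origin=(0.0, 0.0, 0.0), direction=(1.0, 0.2, 0.1))
print(len(voxels), voxels[0])
```

Note that the step size trades accuracy for speed, which is precisely the weakness the closed-form-plus-binary-search traversal of the paper avoids.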



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Beatriz Oliveira (1)
  • Pablo Lanillos (1)
  • João Filipe Ferreira (1)
  1. AP4ISR Team, Institute of Systems and Robotics (ISR), University of Coimbra, Coimbra, Portugal
