Abstract
Navigating the visual world is a challenging problem for autonomous agents, which must be flexible, robust, and preferably easily extensible in order to meet changing task demands. Here, we outline the rationale for an approach to constructing such agents biomimetically, even if they appear ‘over-engineered’ at first glance, using the problem of gaze redirection in an attentional task.
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Cope, A., Chambers, J., Gurney, K. (2011). A Systems Integration Approach to Creating Embodied Biomimetic Models of Active Vision. In: Groß, R., Alboul, L., Melhuish, C., Witkowski, M., Prescott, T.J., Penders, J. (eds) Towards Autonomous Robotic Systems. TAROS 2011. Lecture Notes in Computer Science(), vol 6856. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23232-9_33
DOI: https://doi.org/10.1007/978-3-642-23232-9_33
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-23231-2
Online ISBN: 978-3-642-23232-9