Peripheral Visual Field, Fixation and Direction of Heading

  • Inigo Thomas
  • Eero Simoncelli
  • Ruzena Bajcsy
Part of the Springer Series in Perception Engineering book series (SSPERCEPTION)


Although moving human observers actively fixate points in the world with their eyes, computer vision algorithms designed for the estimation of structure-from-motion or egomotion typically do not make use of this constraint. In this paper, we investigate the computational advantage of fixation. The main contribution of this work is to specify precisely the form of the optical flow field for a fixating observer moving in a rigid world. In particular, we show that the use of a hemispherical (retinal) imaging surface combined with the active process of fixation generates an optical flow field of a particularly simple form. A further contribution is the finding that the sign of retinal flow at the retinal periphery can be used to predict collisions.
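The geometry sketched in the abstract can be illustrated numerically. The Python fragment below is not the paper's derivation, only an illustrative construction: it uses the standard motion-field equation for a spherical imaging surface, imposes the fixation constraint by choosing a rotation that cancels the translational flow at the fixation direction, and then evaluates the longitudinal flow component at a direction 90° from the fixation axis for a near and a distant scene point. The sign of that component differs with depth, consistent with the claim that peripheral flow sign carries collision-relevant information. All numerical values (heading, depths) are arbitrary assumptions.

```python
import numpy as np

def sphere_flow(d, Z, T, omega):
    """Motion field on a unit viewing sphere.

    d     : unit view direction of a scene point
    Z     : depth of the point along d
    T     : observer translational velocity
    omega : observer rotational velocity
    """
    trans = -(T - np.dot(T, d) * d) / Z   # translational component
    rot = -np.cross(omega, d)             # rotational component
    return trans + rot

# Illustrative setup (values arbitrary): fixate along +z.
f = np.array([0.0, 0.0, 1.0])             # fixation direction
Zf = 4.0                                  # depth of the fixated point
T = np.array([0.3, 0.0, 1.0])
T /= np.linalg.norm(T)                    # heading slightly off the fixation axis

# Fixation constraint: choose omega (orthogonal to f, i.e. no torsion)
# so that the flow vanishes at the fixation direction.
vt = -(T - np.dot(T, f) * f) / Zf         # translational flow at f
omega = np.cross(f, vt)                   # then (omega x f) == vt, cancelling it

assert np.allclose(sphere_flow(f, Zf, T, omega), 0.0)

# Longitudinal (meridional) flow at a peripheral direction,
# 90 degrees from the fixation axis, for two depths.
d_peri = np.array([-1.0, 0.0, 0.0])
lon = lambda v: np.dot(v, f)              # component along the meridian, toward the pole
near = lon(sphere_flow(d_peri, 2.0, T, omega))
far = lon(sphere_flow(d_peri, 40.0, T, omega))
# Opposite signs: near < 0 < far for this configuration.
```

The zero-flow check at `f` is exactly the fixation condition; the sign flip between `near` and `far` shows, for this one configuration, how peripheral flow sign depends on scene depth relative to the fixated point.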


Keywords: Optical Flow · North Pole · Critical Plane · Zero Flow · Longitudinal Flow (machine-generated, not supplied by the authors)





Copyright information

© Springer-Verlag New York, Inc. 1996

Authors and Affiliations

  • Inigo Thomas¹
  • Eero Simoncelli¹
  • Ruzena Bajcsy¹
  1. Department of Computer Science, University of Pennsylvania, USA
