Focus of Expansion Localization through Inverse C-Velocity

  • Adrien Bak
  • Samia Bouchafa
  • Didier Aubert
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6978)

Abstract

The Focus of Expansion (FoE) sums up all the available information on translational ego-motion for monocular systems. It has also been shown to present interesting features in cognitive research. As such, its localization is of great importance, both for robotic applications and for attention-fixation research. It will be shown that the so-called C-Velocity framework can be inverted in order to extract the FoE position from a rough estimate of the scene structure. The method relies on a robust cumulative framework and exploits only the relative norm of the optical flow field; as such, it is robust to angular noise and to bias on the absolute optical flow norm.
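
The paper's inverse C-Velocity transform is not reproduced here; as a hypothetical illustration of the cumulative principle the abstract refers to, the following Python sketch votes for candidate FoE positions using only flow directions: under pure translation, each optical-flow vector lies on a line through the FoE, so the accumulator peaks where those lines converge. The function name `foe_voting`, the grid parameters, and the synthetic test are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def foe_voting(points, flows, img_shape, cell=8, tol=2.0):
    """Locate the Focus of Expansion (FoE) by cumulative voting.

    Under pure camera translation, every optical-flow vector lies on a
    line through the FoE.  Each candidate FoE position (one per grid
    cell) receives a vote from every flow vector whose support line
    passes within `tol` pixels.  Only flow directions are used, so the
    result is insensitive to bias on the absolute flow norm.
    """
    h, w = img_shape
    # Candidate FoE positions: centres of a coarse grid of cells.
    ys, xs = np.mgrid[cell // 2:h:cell, cell // 2:w:cell]
    cand = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)  # (M, 2)

    # Unit flow directions (epsilon guards near-zero flow at the FoE).
    d = flows / (np.linalg.norm(flows, axis=1, keepdims=True) + 1e-9)

    # Perpendicular distance from each candidate c to each line
    # {p + t * d}: |d x (c - p)| for unit d (2-D cross product).
    diff = cand[:, None, :] - points[None, :, :]                     # (M, N, 2)
    dist = np.abs(d[None, :, 0] * diff[:, :, 1]
                  - d[None, :, 1] * diff[:, :, 0])
    votes = (dist < tol).sum(axis=1)                                 # (M,)
    return cand[np.argmax(votes)], votes

if __name__ == "__main__":
    # Synthetic radial flow field from a known FoE, with arbitrary
    # per-point norms and additive noise on the flow components.
    rng = np.random.default_rng(0)
    true_foe = np.array([90.0, 60.0])
    pts = rng.uniform(0.0, [160.0, 120.0], size=(500, 2))            # (x, y)
    flow = (pts - true_foe) * rng.uniform(0.5, 2.0, (500, 1))
    flow += rng.normal(0.0, 0.05, flow.shape)
    foe, _ = foe_voting(pts, flow, img_shape=(120, 160))
    print("estimated FoE:", foe)  # near (90, 60), up to grid resolution
```

A finer voting grid, or a coarse-to-fine refinement around the accumulator peak, would sharpen the localization; the paper's contribution goes further by inverting plane-based C-Velocity votes, which this sketch does not attempt.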

Keywords

Optical Flow · Expansion Localization · Building Plane · Scene Structure · Road Plane

References

  1. Bouchafa, S., Zavidovique, B.: C-Velocity: A Cumulative Frame to Segment Objects From Ego-Motion. Pattern Recognition and Image Analysis 19, 583–590 (2009)
  2. McCarthy, C., Barnes, N., Mahony, R.: A Robust Docking Strategy for a Mobile Robot Using Flow Field Divergence. IEEE Transactions on Robotics 24, 832–842 (2008)
  3. Ancona, N., Poggio, T.: Optical Flow from 1D Correlation: Application to a Simple Time-to-Crash Detector. In: Proceedings of the International Conference on Computer Vision, pp. 209–214 (1993)
  4. Fukuchi, M., Tsuchiya, T., Koch, C.: The Focus of Expansion in Optical Flow Fields Acts as a Strong Cue for Visual Attention. Journal of Vision 9(8), article 137 (2009)
  5. Prazdny, K.: Motion and Structure from Optical Flow. In: Proceedings of the Sixth International Joint Conference on Artificial Intelligence (1979)
  6. Roach, J.W., Aggarwal, J.K.: Determining the Movement of Objects from a Sequence of Images. IEEE Transactions on Pattern Analysis and Machine Intelligence 2, 554–562 (1980)
  7. Suhr, J.K., Jung, H.G., Bae, K., Kim, J.: Outlier Rejection for Cameras on Intelligent Vehicles. Pattern Recognition Letters 29, 828–840 (2008)
  8. Sazbon, D., Rotstein, H., Rivlin, E.: Finding the Focus of Expansion and Estimating Range Using Optical Flow Images and a Matched Filter. Machine Vision and Applications 15, 229–236 (2004)
  9. Hummel, R., Sundareswaran, V.: Motion-Parameter Estimation from Global Flow Field Data. IEEE Transactions on Pattern Analysis and Machine Intelligence 15(5), 459–476 (1993)
  10. Wu, F.C., Wang, L., Hu, Z.Y.: FOE Estimation: Can Image Measurement Errors Be Totally "Corrected" by the Geometric Method? Pattern Recognition 40(7), 1971–1980 (2007)
  11. Baker, S., Scharstein, D., Lewis, J.P., Roth, S., Black, M.J., Szeliski, R.: A Database and Evaluation Methodology for Optical Flow. In: IEEE International Conference on Computer Vision, pp. 1–8 (2007)
  12. Rodriguez, S.A., Frémont, V., Bonnifait, P.: Extrinsic Calibration Between a Multi-Layer LIDAR and a Camera. In: Proceedings of the IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems, pp. 214–219 (2008)
  13. Iocchi, L., Konolige, K., Bajracharya, M.: Visually Realistic Mapping of a Planar Environment with Stereo. In: Experimental Robotics VII. LNCIS, vol. 271, pp. 521–532 (2001)
  14. Bouchafa, S., Patri, A., Zavidovique, B.: Efficient Plane Detection From a Single Moving Camera. In: International Conference on Image Processing, pp. 3493–3496 (2009)
  15. Labayrade, R., Aubert, D., Tarel, J.-P.: Real-Time Obstacle Detection on Non-Flat Road Geometry Through V-Disparity Representation. In: Intelligent Vehicles Symposium, pp. 646–651 (2002)
  16. Gruyer, D., Royere, C., du Lac, N., Michel, G., Blosseville, J.-M.: SiVIC and RTMaps, Interconnected Platforms for the Conception and the Evaluation of Driving Assistance Systems. In: IEEE Conference on Intelligent Transportation Systems (2006)
  17. Le Besnerais, G., Champagnat, F.: Dense Optical Flow by Iterative Local Window Registration. In: IEEE International Conference on Image Processing, vol. 1, pp. 137–140 (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Adrien Bak 1
  • Samia Bouchafa 1
  • Didier Aubert 2

  1. Institut d’Electronique Fondamentale, Université Paris-Sud, Orsay, France
  2. IFSTTAR, Paris, France
