Wide Base Stereo with Fisheye Optics: A Robust Approach for 3D Reconstruction in Driving Assistance

  • Jose Esparza
  • Michael Helmle
  • Bernd Jähne
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8753)


We propose a new approach to 3D environment reconstruction based on automotive surround-view systems with fisheye cameras. In particular, we demonstrate that stereo vision techniques can be applied in the overlapping areas of adjacent cameras, which span up to 90 degrees per camera pair in the current setup. The lateral limitations are mainly due to the present system configuration and could be extended. No temporal accumulation is required; the update rate of the range information is therefore given by the frame rate of the imager. We show by means of experimental results that our approach is capable of delivering 3D information from a pair of images under the described configuration.
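The abstract describes matching between adjacent fisheye cameras and recovering range at frame rate. The paper itself does not spell out an implementation here, but a common pipeline for such a setup is to reproject each fisheye view onto a virtual pinhole camera, rectify the pair so epipolar lines are horizontal, and then convert disparity to depth. The final triangulation step can be sketched as follows; the focal length and baseline values are illustrative assumptions, not figures from the paper:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Depth from disparity for a rectified virtual pinhole stereo pair.

    A hedged sketch of the standard relation Z = f * b / d, assuming the
    fisheye images have already been rectified onto virtual pinhole views.

    disparity_px : disparity values in pixels (> 0 where a match was found)
    focal_px     : focal length of the virtual camera, in pixels
    baseline_m   : distance between the two camera centres, in metres
    """
    d = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(d, np.inf)   # unmatched pixels map to infinity
    valid = d > 0
    depth[valid] = focal_px * baseline_m / d[valid]
    return depth

# Illustrative numbers only: with a wide ~1 m baseline between adjacent
# surround-view cameras and a 400 px virtual focal length, a 10 px
# disparity corresponds to a depth of 40 m.
depth = disparity_to_depth([10.0, 40.0], focal_px=400.0, baseline_m=1.0)
```

Note the design trade-off this formula makes explicit: the wide baseline of a surround-view rig improves depth resolution at long range, at the cost of a harder correspondence problem between the strongly differing fisheye viewpoints.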





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Heidelberg Collaboratory for Image Processing, Heidelberg, Germany
  2. Robert Bosch GmbH, Chassis Control Driving Assistance, Leonberg, Germany
