Confocal Stereo

  • Samuel W. Hasinoff
  • Kiriakos N. Kutulakos
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3951)


We present confocal stereo, a new method for computing 3D shape by controlling the focus and aperture of a lens. The method is specifically designed for reconstructing scenes with high geometric complexity or fine-scale texture. To achieve this, we introduce the confocal constancy property, which states that as the lens aperture varies, the pixel intensity of a visible in-focus scene point will vary in a scene-independent way that can be predicted by prior radiometric lens calibration. The only requirement is that incoming radiance within the cone subtended by the largest aperture is nearly constant. First, we develop a detailed lens model that factors out the distortions in high-resolution SLR cameras (12 MP or more) with large-aperture lenses (e.g., f/1.2). This allows us to assemble an A × F aperture-focus image (AFI) for each pixel, which collects the undistorted measurements over all A apertures and F focus settings. In the AFI representation, confocal constancy reduces to color comparisons within regions of the AFI and leads to focus metrics that can be evaluated separately for each pixel. We propose two such metrics and present initial reconstruction results for complex scenes.
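The core idea can be sketched in a few lines: after radiometric calibration predicts how an in-focus pixel's intensity scales with aperture, the focus setting whose AFI column best matches that prediction is selected. The sketch below is a minimal illustration, not the paper's actual metric; the function name, the grayscale AFI, and the per-aperture `exitance` calibration factors are assumptions for the example.

```python
import numpy as np

def confocal_focus_metric(afi, exitance):
    """Pick a focus setting for one pixel via confocal constancy.

    afi      -- (A, F) array of pixel intensities over A apertures and
                F focus settings (a grayscale aperture-focus image).
    exitance -- (A,) relative intensity-scaling factors per aperture,
                obtained from prior radiometric lens calibration.
    Returns the index of the focus setting whose column varies least
    across apertures after the calibration is factored out.
    """
    # Divide out the predicted aperture-dependent scaling; for an
    # in-focus scene point the normalized column is nearly constant.
    normalized = afi / exitance[:, None]
    # Variance across apertures at each focus setting (low = in focus).
    metric = normalized.var(axis=0)
    return int(metric.argmin())
```

A quick synthetic check: build an AFI where one column follows the calibrated scaling exactly while the others deviate, and the metric recovers that column.

```python
A, F = 5, 7
exitance = np.linspace(1.0, 2.0, A)
rng = np.random.default_rng(0)
# Defocused columns: aperture scaling corrupted by scene-dependent variation.
afi = exitance[:, None] * (1.0 + 0.3 * rng.standard_normal((A, F)))
afi[:, 3] = 0.8 * exitance  # in-focus column: pure calibrated scaling
print(confocal_focus_metric(afi, exitance))
```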


Keywords: Image Alignment · Outgoing Radiance · Scene Point · Sensor Plane · Focus Setting



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Samuel W. Hasinoff (1)
  • Kiriakos N. Kutulakos (1)

  1. Dept. of Computer Science, University of Toronto, Canada
