
Analyzing Depth from Coded Aperture Sets

  • Anat Levin
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6311)

Abstract

Computational depth estimation is a central task in computer vision and graphics. A large variety of strategies have been introduced, relying on viewpoint variation, defocus changes, and general aperture codes; however, the tradeoffs between such designs are not well understood. Depth estimation from computational camera measurements is a highly non-linear process, and therefore most attempts to evaluate depth estimation strategies rely on numerical simulation. Previous attempts to design computational cameras with good depth discrimination optimized highly non-linear, non-convex scores, so it is not clear whether the resulting designs are optimal. In this paper we address the problem of depth discrimination from J images captured using J arbitrary codes placed within one fixed lens aperture. We analyze the desired properties of discriminative codes under a geometric optics model and propose an upper bound on the best possible discrimination. We show that under a multiplicative noise model, the half-ring codes discovered by Zhou et al. [1] are near-optimal. When a large number of images is allowed, a multi-aperture camera [2] dividing the aperture into multiple annular rings provides near-optimal discrimination. In contrast, the plenoptic camera of [5], which divides the aperture into circles of compact support, can achieve at most 50% of the optimal discrimination bound.
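To make the quantities in the abstract concrete, the sketch below is a rough illustration, not the paper's actual score: under geometric optics, the defocus PSF of each aperture code is the code scaled by a depth-dependent blur size, and depth discrimination depends on how much the J-dimensional vector of optical transfer function (OTF) values at each frequency rotates between two depth hypotheses. All function names, the half-ring layout, and the angle-based score are our own illustrative choices.

```python
import numpy as np

def aperture_psf(code, scale, grid=64):
    """Geometric-optics defocus PSF: the binary aperture code scaled to the
    depth-dependent blur size, centered on a fixed grid and normalized."""
    n = code.shape[0]
    size = max(int(round(n * scale)), 1)
    idx = np.arange(size) * n // size            # nearest-neighbour resample
    small = code[np.ix_(idx, idx)]
    psf = np.zeros((grid, grid))
    off = (grid - size) // 2
    psf[off:off + size, off:off + size] = small
    return psf / psf.sum()

def discrimination(codes, scale1, scale2, tol=1e-9):
    """Sum over frequencies of one minus the squared cosine of the angle
    between the per-frequency J-vectors of OTF values at the two depths.
    The score is invariant to the unknown scene spectrum at each frequency."""
    v1 = np.stack([np.fft.fft2(aperture_psf(c, scale1)).ravel() for c in codes])
    v2 = np.stack([np.fft.fft2(aperture_psf(c, scale2)).ravel() for c in codes])
    p1 = np.sum(np.abs(v1) ** 2, axis=0)
    p2 = np.sum(np.abs(v2) ** 2, axis=0)
    num = np.abs(np.sum(np.conj(v1) * v2, axis=0)) ** 2
    m = (p1 > tol) & (p2 > tol)                  # skip frequencies both depths kill
    return float(np.sum(1.0 - num[m] / (p1[m] * p2[m])))

# Two complementary half-ring codes (a hypothetical layout in the spirit of
# the Zhou et al. pairs), defined on a 16x16 aperture grid.
n = 16
y, x = np.mgrid[:n, :n]
r = np.hypot(x - (n - 1) / 2, y - (n - 1) / 2)
ring = (r >= 0.25 * n) & (r < 0.5 * n)
codes = [(ring & (x < n // 2)).astype(float),
         (ring & (x >= n // 2)).astype(float)]

score = discrimination(codes, 0.5, 0.9)          # two depth hypotheses
```

Note that with a single code (J = 1) this score is identically zero, because a one-dimensional OTF vector cannot change direction: under this simplified measure, depth discrimination requires multiple exposures or additional structure such as the zero crossings exploited by coded apertures.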

Keywords

Multiplicative Noise · Discrimination Score · Derivative Power · Optical Transfer Function · Lens Aperture
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.


References

  1. Zhou, C., Lin, S., Nayar, S.K.: Coded aperture pairs for depth from defocus. In: ICCV (2009)
  2. Green, P., Sun, W., Matusik, W., Durand, F.: Multi-aperture photography. In: SIGGRAPH (2007)
  3. Scharstein, D., Szeliski, R.: A taxonomy and evaluation of dense two-frame stereo correspondence algorithms. IJCV (2002)
  4. Adelson, E., Wang, J.: Single lens stereo with a plenoptic camera. IEEE PAMI (1992)
  5. Georgiev, T., Zheng, K., Curless, B., Salesin, D., Nayar, S., Intwala, C.: Spatio-angular resolution tradeoffs in integral photography. In: EGSR (2006)
  6. Chaudhuri, S., Rajagopalan, A.: Depth from Defocus: A Real Aperture Imaging Approach. Springer, New York (1999)
  7. Veeraraghavan, A., Raskar, R., Agrawal, A., Mohan, A., Tumblin, J.: Dappled photography: Mask-enhanced cameras for heterodyned light fields and coded aperture refocusing. In: SIGGRAPH (2007)
  8. Levin, A., Fergus, R., Durand, F., Freeman, W.: Image and depth from a conventional camera with a coded aperture. In: SIGGRAPH (2007)
  9. Dowski, E., Cathey, W.: Single-lens single-image incoherent passive-ranging systems. Applied Optics (1994)
  10. Levin, A., Hasinoff, S., Green, P., Durand, F., Freeman, W.: 4D frequency analysis of computational cameras for depth of field extension. In: SIGGRAPH (2009)
  11. Schechner, Y., Kiryati, N.: Depth from defocus vs. stereo: How different really are they? IJCV (2000)
  12. Farid, H., Simoncelli, E.: Range estimation by optical differentiation. JOSA (1998)
  13. Vaish, V., Levoy, M., Szeliski, R., Zitnick, C., Kang, S.: Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures. In: CVPR (2006)
  14. Levin, A., Freeman, W., Durand, F.: Understanding camera trade-offs through a Bayesian analysis of light field projections. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part IV. LNCS, vol. 5305, pp. 88–101. Springer, Heidelberg (2008)
  15. Hasinoff, S., Kutulakos, K.: Light-efficient photography. In: Forsyth, D., Torr, P., Zisserman, A. (eds.) ECCV 2008, Part IV. LNCS, vol. 5305, pp. 45–59. Springer, Heidelberg (2008)
  16. Hasinoff, S., Kutulakos, K., Durand, F., Freeman, W.: Time-constrained photography. In: ICCV (2009)
  17. Zhou, C., Nayar, S.K.: What are good apertures for defocus deblurring? In: IEEE International Conference on Computational Photography (2009)
  18. Ng, R.: Fourier slice photography. In: SIGGRAPH (2005)

Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Anat Levin
    1. Department of Computer Science and Applied Math, The Weizmann Institute of Science
