Less Is More: Coded Computational Photography

  • Ramesh Raskar
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4843)

Abstract

Computational photography combines plentiful computing, digital sensors, modern optics, actuators, and smart lights to escape the limitations of traditional cameras, enable novel imaging applications, and simplify many computer vision tasks. However, most current computational photography methods take multiple sequential photos while varying scene parameters and fuse them into a richer representation. The goal of Coded Computational Photography is to modify the optics, illumination, or sensors at the time of capture so that scene properties are encoded in a single photograph (or a few). We describe several applications of coding exposure, aperture, illumination, and sensing, and describe emerging techniques to recover scene parameters from coded photographs.
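The coded-exposure idea mentioned in the abstract can be illustrated with a minimal 1D sketch: a conventional shutter that stays open for the whole exposure acts as a box filter whose spectrum has zeros, so motion deblurring is ill-posed, whereas a shutter that flutters open and closed in a broadband binary pattern preserves all frequencies and makes the blur invertible. The 52-chip code below is a random illustration, not the optimized sequence from the actual fluttered-shutter work, and circular blur is assumed for simplicity.

```python
import numpy as np

# 1D sketch of coded-exposure ("fluttered shutter") deblurring,
# assuming circular motion blur over 52 time chips. The binary code
# here is random, not the optimized sequence from the paper.
rng = np.random.default_rng(0)
n = 256
scene = rng.random(n)                       # 1D scene sliding past the sensor

# Traditional shutter: open for the whole exposure (a box filter).
box = np.zeros(n)
box[:52] = 1.0 / 52
print(np.abs(np.fft.fft(box)).min())        # ~0: some frequencies are destroyed

# Coded shutter: a pseudorandom open/close pattern preserves all frequencies.
chips = rng.integers(0, 2, size=52).astype(float)
chips[0] = 1.0                              # shutter opens at least once
psf = np.zeros(n)
psf[:52] = chips / chips.sum()              # normalized coded blur kernel

# Blur = circular convolution with the kernel; invert by spectral division.
blurred = np.fft.ifft(np.fft.fft(scene) * np.fft.fft(psf)).real
recovered = np.fft.ifft(np.fft.fft(blurred) / np.fft.fft(psf)).real
print(np.max(np.abs(recovered - scene)))    # near 0: deblurring is well-posed
```

The box filter's spectral zeros mean that no amount of processing can recover the lost frequencies, while the coded kernel's spectrum stays bounded away from zero, which is exactly what makes single-shot motion deblurring tractable.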

Keywords

High Dynamic Range, Microlens Array, High Dynamic Range Image, Digital Sensor, Optical Heterodyne


Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Ramesh Raskar
  1. Mitsubishi Electric Research Labs (MERL), Cambridge, MA, USA
