
A Practical Approach to 3D Scanning in the Presence of Interreflections, Subsurface Scattering and Defocus

International Journal of Computer Vision

Abstract

Global or indirect illumination effects such as interreflections and subsurface scattering severely degrade the performance of structured light-based 3D scanning. In this paper, we analyze the errors in structured light caused by both long-range (interreflections) and short-range (subsurface scattering) indirect illumination. The errors depend on the frequency of the projected patterns and on the nature of the indirect illumination. In particular, we show that long-range effects cause decoding errors for low-frequency patterns, whereas short-range effects affect high-frequency patterns.

Based on this analysis, we present a practical 3D scanning system which works in the presence of a broad range of indirect illumination. First, we design binary structured light patterns that are resilient to individual indirect illumination effects using simple logical operations and tools from combinatorial mathematics. Scenes exhibiting multiple phenomena are handled by combining results from a small ensemble of such patterns. This combination also allows detecting any residual errors that are corrected by acquiring a few additional images. Our methods can be readily incorporated into existing scanning systems without significant overhead in terms of capture time or hardware. We show results for several scenes with complex shape and material properties.
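As a concrete illustration of the "simple logical operations" mentioned above, the following is a minimal sketch, in Python, of an XOR-based construction in the spirit of the high-frequency XOR codes referred to in note 10: every conventional Gray-code pattern is XOR-ed with the thinnest-stripe pattern, so that all projected patterns become high frequency, and the original bits are recovered by repeating the XOR at decode time. The pattern sizes, function names, and choice of base pattern are illustrative assumptions, not the paper's exact parameters.

    import numpy as np

    def gray_code_patterns(num_bits, width):
        # Conventional Gray-code stripe patterns: one binary row per bit plane.
        # Column c is encoded by the binary-reflected Gray code of c.
        cols = np.arange(width)
        gray = cols ^ (cols >> 1)
        bits = (gray[None, :] >> np.arange(num_bits)[:, None]) & 1
        return bits[::-1].astype(np.uint8)  # thickest-stripe (most significant) plane first

    def xor_patterns(patterns, base_index=-1):
        # Logical XOR codes: XOR every pattern with a chosen thin-stripe base
        # pattern so that all projected patterns have only thin stripes.
        # Decoding mirrors the construction: decoded = captured ^ captured[base].
        base = patterns[base_index]
        out = patterns ^ base[None, :]
        out[base_index] = base  # the base pattern itself is projected unchanged
        return out

    # Example (assumed sizes): 10-bit codes for a 1024-column projector.
    gray = gray_code_patterns(10, 1024)
    xor_codes = xor_patterns(gray, base_index=-1)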




Notes

  1. Global illumination should not be confused with the oft-used “ambient illumination”, which is subtracted by capturing an image with the structured light source turned off.

  2. In photometric stereo, interreflections result in a shallow but smooth reconstruction (Nayar et al. 1991, 2006). In structured light 3D scanning, interreflections result in local errors.

  3. Strictly speaking, since all binary patterns have step edges, all of them contain high spatial frequencies. For the analysis and discussion in this paper, the term low-frequency patterns refers to patterns with thick stripes, and high-frequency patterns to patterns with only thin stripes.

  4. The inverse-pattern image can be generated by subtracting the pattern image from the image of the fully lit scene (see the binarization sketch following these notes).

  5. For example, pico-projectors are increasingly popular for structured light applications in industrial assembly lines. However, due to imperfect optics, they cannot resolve patterns with thin stripes, for example, a stripe pattern of 2-pixel width.

  6. Errors for the particular case of laser range scanning of translucent materials were analyzed in Godin et al. (2001). Errors due to sensor noise and spatial mis-alignment of projector-camera pixels were analyzed in Trobina (1995).

  7. The color of the incident illumination can be decoded from the images of the illuminated scene on a per-pixel basis, even for non-white scenes (Caspi et al. 1998). It is not necessary to assume spatial smoothness or color neutrality of the scene.

  8. Two additional images of the scene, one under all-white illumination and one under all-black illumination, were acquired to establish the per-pixel intensity thresholds for binarization (see the binarization sketch following these notes).

  9. It is relatively easy to generate codes with a small maximum stripe-width. For example, we could find 10-bit codes with a maximum stripe-width of 9 pixels by performing a brute-force search (illustrated after these notes). In comparison, conventional Gray codes have a maximum stripe-width of 512 pixels.

  10. Due to imperfect projector optics, insufficient camera or projector resolution, or misalignment between projector and camera pixels, the depth results from individual codes might suffer from spatial aliasing. This problem is more pronounced for the high-frequency XOR codes. To prevent aliasing from affecting the final depth estimate, we apply a median filter (typically 3×3 or 5×5) to the individual correspondence maps before performing the consistency check (see the filtering sketch following these notes).

  11. We projected only the logical codes in subsequent iterations, thus requiring 82 images in total.
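The per-pixel binarization of note 8 and the synthesized inverse image of note 4 can be sketched as follows. This is a minimal sketch assuming floating-point images; the 0.5 blend and the contrast threshold of 5 gray levels are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def binarize_pattern_image(pattern_img, white_img, black_img):
        # Note 8: threshold each pixel against its own intensity range, derived
        # from the all-white and all-black images, instead of a global threshold.
        threshold = 0.5 * (white_img + black_img)
        bits = (pattern_img > threshold).astype(np.uint8)
        valid = (white_img - black_img) > 5.0  # pixels with enough contrast to classify
        return bits, valid

    def synthesize_inverse_image(pattern_img, white_img, black_img):
        # Note 4: the image under the inverse pattern can be synthesized by
        # subtracting the pattern image from the fully lit image; the all-black
        # image accounts for the ambient offset. This saves one capture per pattern.
        return white_img - pattern_img + black_img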
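The maximum stripe width that the brute-force search of note 9 minimizes can be measured directly from the bit-plane patterns. Below is a minimal sketch; the random search over column-to-codeword assignments is only an illustrative stand-in for the paper's brute-force search, whose actual search space is not reproduced here.

    import numpy as np

    def max_stripe_width(patterns):
        # Longest run of identical values in any bit-plane pattern, i.e. the
        # maximum stripe width of the code (the quantity minimized in note 9).
        widest = 0
        for row in patterns:
            changes = np.flatnonzero(np.diff(row)) + 1
            edges = np.concatenate(([0], changes, [len(row)]))
            widest = max(widest, int(np.max(np.diff(edges))))
        return widest

    def random_code_search(num_bits=10, width=1024, trials=1000, seed=0):
        # Illustrative search: draw random one-to-one assignments of codewords to
        # projector columns and keep the assignment whose bit planes have the
        # smallest maximum stripe width.
        rng = np.random.default_rng(seed)
        best_planes, best_width = None, width + 1
        for _ in range(trials):
            codewords = rng.permutation(2 ** num_bits)[:width]
            planes = ((codewords[None, :] >> np.arange(num_bits)[:, None]) & 1).astype(np.uint8)
            w = max_stripe_width(planes)
            if w < best_width:
                best_planes, best_width = planes, w
        return best_planes, best_width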
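The aliasing suppression and consistency check of note 10 might look roughly like the following sketch. The agreement tolerance of one projector column, the pairwise averaging, and the use of NaN to mark residual errors are illustrative assumptions rather than the paper's exact choices.

    import numpy as np
    from scipy.ndimage import median_filter

    def combine_correspondence_maps(correspondence_maps, tolerance=1.0, filter_size=3):
        # Note 10: median-filter each code's correspondence map to suppress spatial
        # aliasing, then keep a pixel only where at least two maps agree.
        maps = [median_filter(m.astype(np.float32), size=filter_size)
                for m in correspondence_maps]
        stack = np.stack(maps)                     # shape: (num_codes, H, W)
        result = np.full(stack.shape[1:], np.nan)  # NaN marks residual error pixels
        for i in range(len(maps)):
            for j in range(i + 1, len(maps)):
                agree = np.abs(stack[i] - stack[j]) <= tolerance
                fill = agree & np.isnan(result)
                result[fill] = 0.5 * (stack[i][fill] + stack[j][fill])
        return result  # error pixels (NaN) are corrected by acquiring extra images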

References

  • Aliaga, D. G., & Xu, Y. (2008). Photogeometric structured light: A self-calibrating and multi-viewpoint framework for accurate 3D modeling. In CVPR.


  • Atcheson, B., Ihrke, I., Heidrich, W., Tevs, A., Bradley, D., Magnor, M., & Seidel, H. (2008). Time-resolved 3D capture of nonstationary gas flows. ACM Transactions on Graphics, 27(3).

  • Caspi, D., Kiryati, N., & Shamir, J. (1998). Range imaging with adaptive color structured light. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20.

  • Chandraker, M. K., Kahl, F., & Kriegman, D. J. (2005). Reflections on the generalized bas-relief ambiguity. In CVPR.


  • Chen, T., Lensch, H. P. A., Fuchs, C., & Seidel, H. P. (2007). Polarization and phase-shifting for 3D scanning of translucent objects. In CVPR.


  • Chen, T., Seidel, H.-P., & Lensch, H. P. A. (2008). Modulated phase-shifting for 3D scanning. In CVPR.


  • Couture, V., Martin, N., & Roy, S. (2011). Unstructured light scanning to overcome interreflections. In ICCV.


  • Er, M. C. (1984). On generating the n-ary reflected Gray codes. IEEE Transactions on Computers, C-33(8).

  • Goddyn, L., & Gvozdjak, P. (2003). Binary Gray codes with long bit runs. The Electronic Journal of Combinatorics.

  • Godin, G., Beraldin, J.-A., Rioux, M., Levoy, M., Cournoyer, L., & Blais, F. (2001). An assessment of laser range measurement of marble surfaces. In Proc. of fifth conference on optical 3D measurement techniques.


  • Gu, J., Nayar, S. K., Grinspun, E., Belhumeur, P. N., & Ramamoorthi, R. (2008). Compressive structured light for recovering inhomogeneous participating media. In ECCV.


  • Gu, J., Kobayashi, T., Gupta, M., & Nayar, S. K. (2011). Multiplexed illumination for scene recovery in the presence of global illumination. In ICCV.


  • Gühring, J. (2001). Dense 3-D surface acquisition by structured light using off-the-shelf components. Videometrics and Optical Methods for 3D Shape Measurement, 4309, 220–231.


  • Gupta, M., & Nayar, S. K. (2012). Micro phase shifting. In CVPR.


  • Gupta, M., Narasimhan, S. G., & Schechner, Y. Y. (2008). On controlling light transport in poor visibility environments. In CVPR.


  • Gupta, M., Tian, Y., Narasimhan, S. G., & Zhang, L. (2009). (De) Focusing on global light transport for active scene recovery. In CVPR.


  • Gupta, M., Agrawal, A., Veeraraghavan, A., & Narasimhan, S. G. (2011). Structured light 3D scanning in the presence of global illumination. In CVPR.


  • Hermans, C., Francken, Y., Cuypers, T., & Bekaert, P. (2009). Depth from sliding projections. In CVPR.


  • Holroyd, M., Lawrence, J., & Zickler, T. (2010). A coaxial optical scanner for synchronous acquisition of 3D geometry and surface reflectance. ACM Transactions on Graphics, 29(3).

  • Horn, E., & Kiryati, N. (1997). Toward optimal structured light patterns. In Proc. international conference on recent advances in 3-D digital imaging and modeling.


  • Ihrke, I., Kutulakos, K. N., Lensch, H. P. A., Magnor, M., & Heidrich, W. (2008). State of the art in transparent and specular object reconstruction. In STAR proceedings of eurographics.


  • Liu, S., Ng, T. T., & Matsushita, Y. (2010). Shape from second-bounce of light transport. In ECCV.


  • Minou, M., Kanade, T., & Sakai, T. (1981). A method of time-coded parallel planes of light for depth measurement. Transactions of IECE Japan, 64(8).

  • Morris, N. J. W., & Kutulakos, K. N. (2007). Reconstructing the surface of inhomogeneous transparent scenes by scatter trace photography. In ICCV.


  • Narasimhan, S. G., Nayar, S. K., Sun, B., & Koppal, S. J. (2005). Structured light in scattering media. In ICCV.


  • Nayar, S. K., Ikeuchi, K., & Kanade, T. (1991). Shape from interreflections. International Journal of Computer Vision, 6(3).

  • Nayar, S. K., Krishnan, G., Grossberg, M. D., & Raskar, R. (2006). Fast separation of direct and global components of a scene using high frequency illumination. ACM Transactions on Graphics, 25(3).

  • Nehab, D., Rusinkiewicz, S., Davis, J., & Ramamoorthi, R. (2005). Efficiently combining positions and normals for precise 3D geometry. ACM Transactions on Graphics, 24(3).

  • Park, J., & Kak, A. C. (2004). Multi-peak range imaging for accurate 3D reconstruction of specular objects. In ACCV.


  • Park, J., & Kak, A. C. (2008). 3D modeling of optically challenging objects. IEEE Transactions on Visualization and Computer Graphics, 14(2).

  • Posdamer, J., & Altschuler, M. (1982). Surface measurement by space-encoded projected beam systems. Computer Graphics and Image Processing, 18(1).

  • Salvi, J., Fernandez, S., Pribanic, T., & Llado, X. (2010). A state of the art in structured light patterns for surface profilometry. Pattern Recognition, 43.

  • Steger, E., & Kutulakos, K. N. (2008). A theory of refractive and specular 3D shape by light-path triangulation. International Journal of Computer Vision, 76(1).

  • Trobina, M. (1995). Error model of a coded-light range sensor. Technical report.

  • Will, P. M., & Pennington, K. S. (1971). Grid coding: A preprocessing technique for robot and machine vision. Artificial Intelligence, 2(3–4).

  • Xu, Y., & Aliaga, D. (2009). An adaptive correspondence algorithm for modeling scenes with strong interreflections. IEEE Transactions on Visualization and Computer Graphics.


  • Zhang, S. (2005). High-resolution, three-dimensional shape measurement. Ph.D. Thesis, Stony Brook University.

  • Zhang, L., & Nayar, S. K. (2006). Projection defocus analysis for scene capture and image display. ACM Transactions on Graphics, 25(3).

  • Zhang, S., Weide, D. V. D., & Oliver, J. (2010). Superfast phase-shifting method for 3-D shape measurement. Optics Express, 18(9).


Acknowledgements

We thank Jay Thornton, Joseph Katz, John Barnwell and Haruhisa Okuda (Mitsubishi Electric Japan) for their help and support. Mohit Gupta was partially supported by ONR grant N00014-11-1-0295. Srinivasa Narasimhan was partially supported by NSF grants IIS-0964562 and CAREER IIS-0643628 and a Samsung SAIT GRO grant. Ashok Veeraraghavan was partially supported by NSF Grants IIS-1116718 and CCF-1117939. The authors thank Vincent Chapdelaine-Couture for sharing their data-sets.

Author information


Corresponding author

Correspondence to Mohit Gupta.

Additional information

A preliminary version of this paper appeared in Gupta et al. (2011).



Cite this article

Gupta, M., Agrawal, A., Veeraraghavan, A. et al. A Practical Approach to 3D Scanning in the Presence of Interreflections, Subsurface Scattering and Defocus. Int J Comput Vis 102, 33–55 (2013). https://doi.org/10.1007/s11263-012-0554-3
