Image blurring effects due to depth discontinuities: Blurring that creates emergent image details

  • Thang C. Nguyen
  • Thomas S. Huang
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 588)


A new model, called multi-component blurring (MCB), is presented to account for image blurring effects due to depth discontinuities. We show that blurring processes operating in the vicinity of large depth discontinuities can give rise to emergent image details that are quite distinguishable but nevertheless unexplained by previously available blurring models. In other words, the maximum principle for scale space [Per90] does not hold. We argue that blurring in high-relief 3-D scenes should be more accurately modeled as a multi-component process. We present results from extensive and carefully designed experiments with many images of real scenes taken by a CCD camera with typical parameters. These results consistently support our new blurring model. Due care was taken to ensure that the observed image phenomena are mainly due to defocusing and not to mutual illumination [For89], specularity [Hea87], objects' "finer" structures, coherent diffraction, or incidental image noise [Gla88]. We also hypothesize on the role of blurring in human depth-from-blur perception, based on correlation with recent results on human blur perception [Hes89].
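The contrast between the two models can be illustrated numerically. The sketch below, a simplified layered approximation and not the authors' exact formulation, composites a strongly defocused bright occluder over an in-focus background that has a dark band next to the occluding edge (the scene layout, blur widths, and compositing rule are all illustrative assumptions). A single-component model blurs the all-in-focus (pinhole) image with one wide kernel and nearly washes the band out; the multi-component composite blurs each layer with its own kernel, so the band survives as a pronounced local minimum that the single-kernel scale-space view cannot account for.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian blur kernel."""
    radius = int(4 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    return k / k.sum()

def blur(signal, sigma):
    """Gaussian blur with edge-replicated padding."""
    k = gaussian_kernel(sigma)
    r = len(k) // 2
    padded = np.pad(signal, r, mode="edge")
    return np.convolve(padded, k, mode="same")[r:-r]

x = np.arange(-100, 100)

# Hypothetical scene: a bright foreground occluder covers x < 0; the
# background has a dark band for 0 <= x < 5 and is bright beyond it.
fg_mask = (x < 0).astype(float)   # foreground coverage
fg = 1.0                          # uniform bright foreground
bg = (x >= 5).astype(float)       # background with a dark band

sigma_fg, sigma_bg = 8.0, 1.0     # foreground strongly defocused

# Single-component model: one wide kernel applied to the pinhole image.
pinhole = np.where(fg_mask > 0, fg, bg)
single = blur(pinhole, sigma_fg)

# Multi-component model: each layer blurred with its own kernel, then
# composited through the blurred occlusion mask (layered approximation).
alpha = blur(fg_mask, sigma_fg)
mcb = alpha * fg + (1.0 - alpha) * blur(bg, sigma_bg)

# The dark band is nearly erased by the single kernel but remains a
# deep local minimum under the multi-component composite.
print(mcb.min(), single.min())
```

Under these assumed parameters the multi-component minimum stays well below the single-kernel one, which is the kind of preserved-versus-washed-out detail that distinguishes the two models at a depth discontinuity.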


Keywords: Multi-component image blurring (MCB) · Depth-from-blur · Point-spread functions (kernels) · Incoherent imaging of 3-D scenes · Human blur perception · Active vision


  1. [Che88]
    Chen, Y. C., "Synthetic Image Generation for Highly Defocused Scenes", Recent Advances in Computer Graphics, Springer-Verlag, 1988, pp. 117–125.
  2. [Ens91]
    Ens, J., and Lawrence, P., "A Matrix Based Method for Determining Depth from Focus", Proc. Computer Vision and Pattern Recognition, 1991, pp. 600–606.
  3. [For89]
    Forsyth, D., and Zisserman, A., "Mutual Illumination", Proc. Computer Vision and Pattern Recognition, 1989, California, USA, pp. 466–473.
  4. [Fri67]
    Frieden, B., "Optical Transfer of the Three-Dimensional Object", Journal of the Optical Society of America, Vol. 57, No. 1, 1967, pp. 56–66.
  5. [Gar87]
    Garibotto, G., and Storace, P., "3-D Range Estimate from the Focus Sharpness of Edges", Proc. of the 4th Intl. Conf. on Image Analysis and Processing, 1987, Palermo, Italy, Vol. 2, pp. 321–328.
  6. [Gha78]
    Ghatak, A., and Thyagarajan, K., Contemporary Optics, Plenum Press, New York, 1978.
  7. [Gla88]
    Glasser, J., Vaillant, J., and Chazallet, F., "An Accurate Method for Measuring the Spatial Resolution of Integrated Image Sensor", Proc. SPIE Vol. 1027 Image Processing II, 1988, pp. 40–47.
  8. [Gro87]
    Grossman, P., "Depth from Focus", Pattern Recognition Letters, 5, 1987, pp. 63–69.
  9. [Hea87]
    Healey, G., and Binford, T., "Local Shape from Specularity", Proc. of the 1st Intl. Conf. on Computer Vision (ICCV'87), London, UK, 1987, pp. 151–160.
  10. [Hes89]
    Hess, R. F., Pointer, J. S., and Watt, R. J., "How are spatial filters used in fovea and parafovea?", Journal of the Optical Society of America, A/Vol. 6, No. 2, Feb. 1989, pp. 329–339.
  11. [Hum85]
    Hummel, R., Kimia, B., and Zucker, S., "Gaussian Blur and the Heat Equation: Forward and Inverse Solution", Proc. Computer Vision and Pattern Recognition, 1985, pp. 668–671.
  12. [Kro89]
    Krotkov, E. P., Active Computer Vision by Cooperative Focus and Stereo, Springer-Verlag, 1989, pp. 19–41.
  13. [Lev85]
    Levine, M., Vision in Man and Machine, McGraw-Hill, 1985, pp. 220–224.
  14. [Ngu90a]
    Nguyen, T. C., and Huang, T. S., "Image Blurring Effects Due to Depth Discontinuities", Technical Note ISP-1080, University of Illinois, May 1990.
  15. [Ngu90b]
    Nguyen, T. C., and Huang, T. S., "Image Blurring Effects Due to Depth Discontinuities", Proc. Image Understanding Workshop, 1990, pp. 174–178.
  16. [Per90]
    Perona, P., and Malik, J., "Scale-space and Edge Detection using Anisotropic Diffusion", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-12, No. 7, July 1990, pp. 629–639.
  17. [Pen87]
    Pentland, A., "A New Sense for Depth of Field", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-9, No. 4, 1987, pp. 523–531.
  18. [Pen89]
    Pentland, A., Darrell, T., Turk, M., and Huang, W., "A Simple, Real-time Range Camera", Proc. Computer Vision and Pattern Recognition, 1989, pp. 256–261.
  19. [Sub88a]
    Subbarao, M., "Parallel Depth Recovery by Changing Camera Parameters", Proc. of the 2nd Intl. Conf. on Computer Vision, 1988, pp. 149–155.
  20. [Sub88b]
    Subbarao, M., "Parallel Depth Recovery from Blurred Edges", Proc. Computer Vision and Pattern Recognition, Ann Arbor, June 1988, pp. 498–503.
  21. [TI86]
    Texas Instruments Inc., Advanced Information Document for TI Imaging Sensor TC241, Texas, August 1986.
  22. [Wat83]
    Watt, R. J., and Morgan, M. J., "The Recognition and Representation of Edge Blur: Evidence for Spatial Primitives in Human Vision", Vision Research, Vol. 23, No. 12, 1983, pp. 1465–1477.

Copyright information

© Springer-Verlag Berlin Heidelberg 1992

Authors and Affiliations

  • Thang C. Nguyen (1)
  • Thomas S. Huang (1)

  1. Beckman Institute and Coordinated Science Laboratory, Urbana, USA
