Automated Removal of Partial Occlusion Blur

  • Scott McCloskey
  • Michael Langer
  • Kaleem Siddiqi
Part of the Lecture Notes in Computer Science book series (LNCS, volume 4843)


This paper presents a novel, automated method to remove partial occlusion from a single image. In particular, we are concerned with occlusions resulting from objects that fall on or near the lens during exposure. For each such foreground object, we segment the completely occluded region using a geometric flow. We then look outward from the region of complete occlusion at the segmentation boundary to estimate the width of the partially occluded region. Once the area of complete occlusion and width of the partially occluded region are known, the contribution of the foreground object can be removed. We present experimental results which demonstrate the ability of this method to remove partial occlusion with minimal user interaction. The result is an image with improved visibility in partially occluded regions, which may convey important information or simply improve the image’s aesthetics.
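Concretely, removing a foreground object's contribution can be viewed as inverting a compositing (matting) model I = αF + (1−α)B over the partially occluded band. The sketch below is illustrative, not the authors' implementation: it assumes a linear alpha ramp from 1 at the complete-occlusion boundary to 0 at the estimated band width, and a single known foreground radiance. The function name and parameters (`dist_to_occluder`, `fg_value`) are hypothetical.

```python
import numpy as np

def remove_partial_occlusion(image, dist_to_occluder, width, fg_value):
    """Invert the compositing model I = alpha*F + (1 - alpha)*B.

    image:            observed intensities in [0, 1]
    dist_to_occluder: per-pixel distance from the complete-occlusion
                      boundary (0 at the boundary, growing outward)
    width:            estimated width of the partially occluded band
    fg_value:         estimated radiance contributed by the occluder
    """
    # Assumed model: alpha falls linearly from 1 at the boundary
    # to 0 once the distance reaches the estimated band width.
    alpha = np.clip(1.0 - dist_to_occluder / width, 0.0, 1.0)

    restored = image.astype(float).copy()
    # The background is recoverable only where alpha < 1, i.e. outside
    # the region of complete occlusion.
    mask = alpha < 1.0
    restored[mask] = (image[mask] - alpha[mask] * fg_value) / (1.0 - alpha[mask])
    return np.clip(restored, 0.0, 1.0)
```

With a synthetic observation generated from the same model, the inversion recovers the background exactly; in practice the alpha profile and foreground radiance must be estimated from the image, which is the harder part of the problem.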


Keywords: Complete Occlusion · Foreground Object · Partial Occlusion · Occluded Object · Background Object
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.





Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Scott McCloskey¹
  • Michael Langer¹
  • Kaleem Siddiqi¹

  1. Centre for Intelligent Machines, McGill University
