
Mask-Specific Inpainting with Deep Neural Networks

  • Conference paper
Pattern Recognition (GCPR 2014)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 8753)


Abstract

Most inpainting approaches require a good image model to infer the unknown pixels. In this work, we directly learn a mapping from image patches corrupted by missing pixels onto complete image patches. This mapping is represented as a deep neural network that is automatically trained on a large image data set. In particular, we are interested in the question of whether it is helpful to exploit the shape information of the missing regions, i.e., the masks, which is commonly ignored by other approaches. In comprehensive experiments on various images, we demonstrate that our learning-based approach is able to use this extra information and can achieve state-of-the-art inpainting results. Furthermore, we show that training with such extra information is useful for blind inpainting, where the exact shape of the missing region might be uncertain, for instance due to aliasing effects.
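The abstract summarizes the approach compactly: rather than relying on an explicit image model, a deep network is trained to map a patch corrupted by missing pixels, together with the mask describing which pixels are missing, directly onto the complete patch. Below is a minimal sketch of that setup in PyTorch. The fully connected architecture, patch size, layer widths, activations, and random-mask corruption model are illustrative assumptions for this sketch, not the configuration reported in the paper.

    # Minimal sketch (assumed: patch size, layer widths, tanh activations, random masks).
    import torch
    import torch.nn as nn

    PATCH = 17                # assumed square patch size
    D = PATCH * PATCH         # pixels per patch

    class MaskSpecificInpaintingMLP(nn.Module):
        """Maps a corrupted patch plus its mask onto an estimate of the clean patch."""
        def __init__(self, hidden=2048):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * D, hidden), nn.Tanh(),   # input: corrupted patch and mask
                nn.Linear(hidden, hidden), nn.Tanh(),
                nn.Linear(hidden, D),                  # output: completed patch
            )

        def forward(self, corrupted, mask):
            # corrupted: (B, D) patch with missing pixels set to 0
            # mask:      (B, D) 1 where a pixel is observed, 0 where it is missing
            return self.net(torch.cat([corrupted, mask], dim=1))

    def training_step(model, optimizer, clean_patches):
        """One stochastic step: corrupt clean patches with random masks and
        regress onto the originals with a mean squared error loss."""
        mask = (torch.rand_like(clean_patches) > 0.3).float()  # assumed corruption model
        corrupted = clean_patches * mask
        loss = nn.functional.mse_loss(model(corrupted, mask), clean_patches)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    if __name__ == "__main__":
        model = MaskSpecificInpaintingMLP()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        patches = torch.rand(64, D)  # stand-in for patches sampled from a large image data set
        print(training_step(model, opt, patches))

Feeding the mask alongside the corrupted patch is one plausible way to make the mapping mask-specific; in the blind setting mentioned in the abstract, the mask input would be dropped and the network would have to infer the corrupted locations from the patch alone.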



Author information


Corresponding author

Correspondence to Rolf Köhler.



Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Köhler, R., Schuler, C., Schölkopf, B., Harmeling, S. (2014). Mask-Specific Inpainting with Deep Neural Networks. In: Jiang, X., Hornegger, J., Koch, R. (eds) Pattern Recognition. GCPR 2014. Lecture Notes in Computer Science, vol. 8753. Springer, Cham. https://doi.org/10.1007/978-3-319-11752-2_43


  • DOI: https://doi.org/10.1007/978-3-319-11752-2_43


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11751-5

  • Online ISBN: 978-3-319-11752-2

  • eBook Packages: Computer Science, Computer Science (R0)
