
Deep Learning-Based Image and Video Inpainting: A Survey

  • Published in: International Journal of Computer Vision

Abstract

Image and video inpainting is a classic problem in computer vision and computer graphics that aims to fill the missing regions of images and videos with plausible, realistic content. With the advance of deep learning, this problem has seen significant progress in recent years. The goal of this paper is to comprehensively review deep learning-based methods for image and video inpainting. Specifically, we sort existing methods into categories according to their high-level inpainting pipeline, present the deep learning architectures involved, including CNNs, VAEs, GANs, and diffusion models, and summarize techniques for module design. We review training objectives and common benchmark datasets. We present evaluation metrics for low-level pixel similarity and high-level perceptual similarity, conduct a performance evaluation, and discuss the strengths and weaknesses of representative inpainting methods. We also discuss related real-world applications. Finally, we outline open challenges and suggest potential directions for future research.
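As a concrete illustration of the low-level pixel-similarity metrics the abstract refers to, the sketch below computes PSNR (peak signal-to-noise ratio) between a reference image and an inpainted result. This is a generic minimal implementation, not code from the surveyed methods; the function name and the `max_val=255` assumption for 8-bit images are ours.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR in dB: higher means the restored image is closer to the reference pixel-wise."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Note that pixel metrics such as PSNR reward exact reconstruction and can penalize outputs that are perceptually plausible but differ from the ground truth, which is why surveys like this one also report high-level perceptual metrics.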


References

  • Arjovsky, M., Chintala, S., & Bottou, L. (2017). Wasserstein Generative Adversarial Networks. Int. Conf. Mach. Learn.,70, 214–223.

    Google Scholar 

  • Austin, J., Johnson, D. D., Ho, J., Tarlow, D., & van den Berg, R. (2021). Structured Denoising Diffusion Models in Discrete State-Spaces. Adv. Neural Inform. Process. Syst.,34, 17981–17993.

    Google Scholar 

  • Avrahami, O., Lischinski, D., & Fried, O. (2022). Blended Diffusion for Text-Driven Editing of Natural Images. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 18208-18218).

  • Ballester, C., Bertalmio, M., Caselles, V., Sapiro, G., & Verdera, J. (2001). Filling-in by joint interpolation of vector fields and gray levels. IEEE Trans Image Process, 10(8), 1200–1211.

    Article  MathSciNet  Google Scholar 

  • Baluja, S., Marwood, D., Johnston, N., Covell, M. (2019). Learning to render better image previews. In I2019 IEEE International Conference on Image Processing (ICIP), (pp. 1700-1704). IEEE.

  • Barnes, C., Shechtman, E., Finkelstein, A., & Goldman, D. B. (2009). PatchMatch: A randomized correspondence algorithm for structural image editing. ACM Trans Graph, 28(3), 24.

    Article  Google Scholar 

  • Bertalmio, M., Sapiro, G., Caselles, V., & Ballester, C. (2000). Image inpainting. In Proceedings ACM SIGGRAPH, pp. 417–424

  • Bian, X., Wang, C., Quan, W., Ye, J., Zhang, X., & Yan, D. M. (2022). Scene text removal via cascaded text stroke detection and erasing. Computational Visual Media, 8, 273–287.

    Article  Google Scholar 

  • Blau, Y., & Michaeli, T. (2018). The perception-distortion tradeoff. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6228-6237).

  • Canny, J. (1986). A computational approach to edge detection. IEEE Trans Pattern Anal Mach Intell, 6, 679–698.

    Article  Google Scholar 

  • Cao, C., & Fu, Y. (2021). Learning a sketch tensor space for image inpainting of man-made scenes. In Proceedings of the IEEE/CVF international conference on computer vision, (pp. 14509–14518)

  • Cao, C., Dong, Q., Fu, Y. (2022). Learning prior feature and attention enhanced image inpainting. In European conference on computer vision

  • Carlsson, S. (1988). Sketch based coding of grey level images. Sign Process, 15(1), 57–83.

    Article  Google Scholar 

  • Carreira, J., Zisserman, A. (2017). Quo vadis, action recognition? A new model and the kinetics dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4724–4733)

  • Chang, Y. L, Liu, Z. Y., Lee, K. Y., & Hsu, W. (2019a). Free-form video inpainting with 3d gated convolution and temporal patchgan. In International conference on computer vision, (pp. 9066–9075)

  • Chang, Y. L., Liu, Z. Y., Lee, K. Y., & Hsu, W. (2019b). Learnable gated temporal shift module for deep video inpainting. In The British Machine vision conference

  • Chang, Y. L., Yu Liu, Z., & Hsu, W. (2019). Vornet: Spatio-temporally consistent video inpainting for object removal. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops

  • Chen, C., Cai, J., Hu, Y., Tang, X., Wang, X., Yuan, C., & Bai, S. (2021). Deep interactive video inpainting: An invisibility cloak for harry potter. In Proceedings of the 29th ACM international conference on multimedia (pp. 862-870).

  • Chen, L., Zhang, H., Xiao, J., Nie, L., Shao, J., Liu, W., & Chua, T. S. (2017). Sca-cnn: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5659-5667).

  • Chen, P. (2018). Video retouch: Object removal. http://www.12371.cn/2021/02/08/ARTI1612745858192472.shtml

  • Chen, T., Lucic, M., Houlsby, N., & Gelly, S. (2018). On self modulation for generative adversarial networks. In International conference on learning representations

  • Chi, L., Jiang, B., & Mu, Y. (2020). Fast Fourier Convolution. Adv. Neural Inform. Process. Syst., 33, 4479–4488.

    Google Scholar 

  • Chu, P., Quan, W., Wang, T., Wang, P., Ren, P., & Yan23, D. M. (2021). Deep Video Decaptioning. In Proceedings of the British machine vision conference

  • Chung, H., Sim, B., & Ye, J. C. (2022). Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 12413-12422).

  • Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., & Vedaldi, A. (2014). Describing textures in the wild. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3606-3613).

  • Criminisi, A., Perez, P., & Toyama, K. (2004). Region filling and object removal by exemplar-based image inpainting. IEEE Trans Image Process, 13(9), 1200–1212.

    Article  Google Scholar 

  • Croitoru, F. A., Hondru, V., Ionescu, R. T., & Shah, M. (2023). Diffusion models in vision: A survey. IEEE Trans Pattern Anal Mach Intell, 45(9), 10850–10869.

    Article  Google Scholar 

  • Dai, Q., Chopp, H., Pouyet, E., Cossairt, O., Walton, M., & Katsaggelos, A. K. (2020). Adaptive image sampling using deep learning and its application on x-ray fluorescence image reconstruction. IEEE Trans Multimedia, 22(10), 2564–2578.

    Article  Google Scholar 

  • Darabi, S., Shechtman, E., Barnes, C., Goldman, D. B., & Sen, P. (2012). Image Melding: combining inconsistent images using patch-based synthesis. ACM Trans Graph (Proc SIGGRAPH), 31(4), 1–10.

    Article  Google Scholar 

  • Daubechies, I. (1990). The wavelet transform, time-frequency localization and signal analysis. IEEE Trans Inf Theory, 36(5), 961–1005.

    Article  MathSciNet  Google Scholar 

  • Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248-255). IEEE.

  • Deng, Y., Tang, F., Dong, W., Sun, W., Huang, F., & Xu, C. (2020). Arbitrary style transfer via multi-adaptation network. In Proceedings of the 28th ACM international conference on multimedia (pp. 2719-2727).

  • Deng, Y., Hui, S., Zhou, S., Meng, D., & Wang, J. (2021). Learning contextual transformer network for image inpainting. In Proceedings of the 29th ACM international conference on multimedia (pp. 2529-2538).

  • Deng, Y., Hui, S., Meng, R., Zhou, S., & Wang, J. (2022). Hourglass attention network for image inpainting. In European conference on computer vision (pp. 483-501). Springer Nature Switzerland.

  • Dinh, L., Krueger, D., & Bengio, Y. (2014). Nice: Non-linear independent components estimation. In Int Conf Learn Represent Worksh

  • Doersch, C., Singh, S., Gupta, A., Sivic, J., & Efros, A. A. (2012). What makes paris look like paris? ACM Transactions on Graphics, 31(4), 101.

    Article  Google Scholar 

  • Dolhansky, B., & Ferrer, C. C. (2018). Eye in-painting with exemplar generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition" (pp. 7902-7911).

  • Dong, Q., Cao, C., & Fu, Y. (2022). Incremental transformer structure enhanced image inpainting with masking positional encoding. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11358-11368).

  • Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., & Houlsby, N. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. In International conference on learning representations

  • Dosselmann, R., & Yang, X. D. (2011). A comprehensive assessment of the structural similarity index. Sign Image and Video Process, 5, 81–91.

    Article  Google Scholar 

  • Efros, A., & Leung, T. (1999). Texture synthesis by non-parametric sampling. Int. Conf. Comput. Vis., 2, 1033–1038.

    Google Scholar 

  • Elharrouss, O., Almaadeed, N., Al-Maadeed, S., & Akbari, Y. (2020). Image Inpainting: A Review. Neural Process Letters, 51(2), 2007–2028.

    Article  Google Scholar 

  • Esser, P., Rombach, R., Blattmann, A., & Ommer, B. (2021). ImageBART: Bidirectional Context with Multinomial Diffusion for Autoregressive Image Synthesis. Adv. Neural Inform. Process. Syst., 34, 3518–3532.

    Google Scholar 

  • Everingham, M., Eslami, S. M. A., Gool, L. V., Williams, C. K. I., Winn, J., & Zisserman, A. (2015). The pascal visual object classes challenge: A retrospective. Int J Comput Vis, 111, 98–136.

    Article  Google Scholar 

  • Felzenszwalb, P. F., & Huttenlocher, D. P. (2004). Efficient graph-based image segmentation. Int J Comput Vis, 59, 167–181.

    Article  Google Scholar 

  • Feng, X., Pei, W., Li, F., Chen, F., Zhang, D., & Lu, G. (2022). Generative memory-guided semantic reasoning model for image inpainting. IEEE Trans Circuit Syst Video Technol, 32(11), 7432–7447.

    Article  Google Scholar 

  • Fu, J., Liu, J., Tian, H., Li, Y., Bao, Y., Fang, Z., & Lu, H. (2019). Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3146-3154).

  • Galić, I., Weickert, J., Welk, M., Bruhn, A., Belyaev, A., & Seidel, H. P. (2008). Image compression with anisotropic diffusion. J Math Imaging Vis, 31, 255–269.

    Article  MathSciNet  Google Scholar 

  • Gao, C., Saraf, A., Huang, J. B., & Kopf, J. (2020). Flow-edge guided video completion. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part XII 16 (pp. 713-729).

  • Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image style transfer using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2414-2423).

  • Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., & Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, 27

  • Granados, M., Kim, K. I., Tompkin, J., Kautz, J., & Theobalt, C. (2012). Background inpainting for videos with dynamic objects and a free-moving camera. In Computer Vision-ECCV 2012: 12th European Conference on Computer Vision, Florence, Italy, October 7-13, 2012, Proceedings, Part I 12 (pp. 682-695).

  • Gu, J., Shen, Y., & Zhou, B. (2020). Image processing using multi-code gan prior. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3012-3021).

  • Guillemot, C., & Meur, O. L. (2014). Image inpainting: Overview and recent advances. IEEE Sign Process Magazine, 31(1), 127–144.

    Article  Google Scholar 

  • Guo, Q., Gao, S., Zhang, X., Yin, Y., & Zhang, C. (2018). Patch-based image inpainting via two-stage low rank approximation. IEEE Trans Vis Comput Graph, 24(6), 2023–2036.

    Article  Google Scholar 

  • Guo, X., Yang, H., & Huang, D. (2021). Image inpainting via conditional texture and structure dual generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 14134-14143).

  • Guo, Z., Chen, Z., Yu, T., Chen, J., & Liu, S. (2019). Progressive image inpainting with full-resolution residual network. In Proceedings of the 27th ACM international conference on multimedia (pp. 2496-2504).

  • Han, C., & Wang, J. (2021). Face image inpainting with evolutionary generators. IEEE Sign Process Letters, 28, 190–193.

    Article  Google Scholar 

  • Han, X., Wu, Z., Huang, W., Scott, M. R., & Davis, L. S. (2019). Finet: Compatible and diverse fashion image inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4481-4491).

  • He, K., & Sun, J. (2012). Statistics of patch offsets for image completion. In Computer Vision-ECCV 2012: 12th European conference on computer vision, Florence, Italy, October 7-13, 2012, Proceedings, Part II 12 (pp. 16-29).

  • He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

  • He, K., Chen, X., Xie, S., Li, Y., Dollár, P., & Girshick, R. (2022). Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16000-16009).

  • Herling, J., & Broll, W. (2014). High-quality real-time video inpainting with pixmix. IEEE Trans Vis Comput Graph, 20(6), 866–879.

    Article  Google Scholar 

  • Hertz, A., Mokady, R., Tenenbaum, J., Aberman, K., Pritch, Y,. & Cohen-Or, D. (2022). Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626

  • Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. Advances in neural information processing systems, 30: 6626–6637

    Google Scholar 

  • Ho, J., Jain, A., & Abbeel, P. (2020). Denoising Diffusion Probabilistic Models. Adv. Neural Inform. Process. Syst., 33, 6840–6851.

    Google Scholar 

  • Hochreiter, S., & Schmidhuber, J. (1997). Long short-term memory. Neural Comput, 9(8), 1735–1780.

    Article  Google Scholar 

  • Hong, X., Xiong, P., Ji, R., & Fan, H. (2019). Deep fusion network for image completion. In Proceedings of the 27th ACM international conference on multimedia (pp. 2033-2042).

  • Hoogeboom, E., Nielsen, D., Jaini, P., Forré, P., & Welling, M. (2021). Argmax Flows and Multinomial Diffusion: Learning Categorical Distributions. Adv. Neural Inform. Process. Syst., 34, 12454–12465.

    Google Scholar 

  • Houle, M. E. (2017). Local intrinsic dimensionality I: an extreme-value-theoretic foundation for similarity applications. In Similarity search and applications: 10th international conference, SISAP 2017, Munich, Germany, October 4-6, 2017, Proceedings 10 (pp. 64-79). Springer International Publishing.

  • Houle, M. E. (2017). Local intrinsic dimensionality II: multivariate analysis and distributional support. In Similarity Search and Applications: 10th International Conference, SISAP 2017, Munich, Germany, October 4-6, 2017, Proceedings 10 (pp. 80-95). Springer International Publishing.

  • Hu, J., Shen, L., & Sun, G. (2018). Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7132-7141).

  • Hu, Y. T., Wang, H., Ballas, N., Grauman, K., & Schwing, A. G. (2020). Proposal-based video completion. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVII 16 (pp. 38-54). Springer International Publishing.

  • Huang, J. B., Kang, S. B., Ahuja, N., & Kopf, J. (2014). Image completion using planar structure guidance. ACM Transactions on Graphics (Proc SIGGRAPH), 33(4), 1–10.

    Google Scholar 

  • Huang, J. B., Kang, S. B., Ahuja, N., & Kopf, J. (2016). Temporally coherent completion of dynamic video. ACM Trans Graph, 35(6), 1–11.

    Google Scholar 

  • Huang, X., & Belongie, S. (2017). Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE international conference on computer vision (pp. 1501-1510).

  • Hui, Z., Li, J., Wang, X., & Gao, X. (2020). Image fine-grained inpainting. arXiv preprint arXiv:2002.02609

  • Iizuka, S., Simo-Serra, E., & Ishikawa, H. (2017). Globally and locally consistent image completion. ACM Trans Graph (Proc SIGGRAPH), 36(4), 1–14.

    Article  Google Scholar 

  • Ilan, S., & Shamir, A. (2015). A survey on data-driven video completion. Comput Graph Forum, 34(6), 60–85.

    Article  Google Scholar 

  • Ilg, E., Mayer, N., Saikia, T., Keuper, M., Dosovitskiy, A., & Brox, T. (2017). Flownet 2.0: Evolution of optical flow estimation with deep networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2462-2470).

  • Isola, P., Zhu, J. Y., Zhou, T., & Efros, A. A. (2017). Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1125-1134).

  • Jam, J., Kendrick, C., Walker, K., Drouard, V., Hsu, J. G. S., & Yap, M. H. (2021). A comprehensive review of past and present image inpainting methods. Comput Vis Image Understand, 203, 103147.

    Article  Google Scholar 

  • Jiang, L., Dai, B., Wu, W., & Loy, C. C. (2021). Focal frequency loss for image reconstruction and synthesis. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 13919-13929).

  • Johnson, J., Alahi, A., & Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. In Computer Vision-ECCV 2016: 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II 14 (pp. 694-711). Springer International Publishing.

  • Kang, J., Oh, S. W., & Kim, S. J. (2022). Error compensation framework for flow-guided video inpainting. In European conference on computer vision (pp. 375-390). Cham: Springer Nature Switzerland.

  • Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2018). Progressive growing of GANs for improved quality, stability, and variation. International conference on learning representations

  • Karras, T., Laine, S., & Aila, T. (2019). A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4401-4410).

  • Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., & Aila, T. (2020). Analyzing and improving the image quality of stylegan. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 8110-8119).

  • Ke L, Tai YW, Tang CK (2021) Occlusion-aware video object inpainting. In: Int. Conf. Comput. Vis., pp 14468–14478

  • Kim, D., Woo, S., Lee, J. Y., & Kweon, I. S. (2019). Deep blind video decaptioning by temporal aggregation and recurrence. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 4263-4272).

  • Kim, D., Woo, S., Lee, J.Y., & Kweon, I.S. (2019b). Deep video inpainting. In IEEE conference on computer vision and pattern Recognition (pp. 5792–5801)

  • Kim, S. Y., Aberman, K., Kanazawa, N., Garg, R., Wadhwa, N., Chang, H., & Liba, O. (2022). Zoom-to-inpaint: Image inpainting with high-frequency details. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 477-487).

  • Kingma, D. P., & Dhariwal, P. (2018). Glow: generative flow with invertible 1x1 convolutions. Advances in Neural Information Processing Systems, 31.

  • Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. In IEEE conference on computer vision and pattern Recognition.

  • Lai, W. S., Huang, J. B., Wang, O., Shechtman, E., Yumer, E., & Yang, M. H. (2018). Learning blind video temporal consistency. In Proceedings of the European conference on computer vision (ECCV) (pp. 170-185).

  • Lao, D., Zhu, P., Wonka, P., & Sundaramoorthi, G. (2021). Flow-guided video inpainting with scene templates. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 14599-14608).

  • Ledig, C., Theis, L., Huszáir, F., Caballero, J., Cunningham, A., Acosta, A., & Shi, W. (2017). Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4681-4690).

  • Lee, S., Oh, S. W., Won, D., & Kim, S. J. (2019). Copy-and-paste networks for deep video inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4413-4421).

  • Lempitsky, V., Vedaldi, A., & Ulyanov, D. (2018). Deep image prior. IEEE conference on computer vision and pattern recognition (pp. 9446–9454)

  • Li, A., Qi, J., Zhang, R., Ma, X., & Ramamohanarao, K. (2019). Generative image inpainting with submanifold alignment. In International joint conference on artificial intelligence (pp. 811–817)

  • Li, A., Zhao, S., Ma, X., Gong, M., Qi, J., Zhang, R., & Kotagiri, R. (2020). Short-term and long-term context aggregation network for video inpainting. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part IV 16 (pp. 728-743). Springer International Publishing.

  • Li, A., Zhao, L., Zuo, Z., Wang, Z., Xing, W., & Lu, D. (2023). Migt: Multi-modal image inpainting guided with text. Neurocomputing, 520, 376–385.

    Article  Google Scholar 

  • Li, B., Zheng, B., Li, H., & Li, Y. (2021). Detail-enhanced image inpainting based on discrete wavelet transforms. Sign Process, 189, 108278.

    Article  Google Scholar 

  • Li, C. T., Siu, W. C., Liu, Z. S., Wang, L. W., & Lun, D. P. K. (2020). DeepGIN: Deep generative inpainting network for extreme image inpainting. In Computer Vision-ECCV 2020 Workshops: Glasgow, UK, August 23-28, 2020, Proceedings, Part IV 16 (pp. 5-22). Springer International Publishing.

  • Li, F., Li, A., Qin, J., Bai, H., Lin, W., Cong, R., & Zhao, Y. (2022). Srinpaintor: When super-resolution meets transformer for image inpainting. IEEE Trans Computational Imaging, 8, 743–758.

    Article  Google Scholar 

  • Li, H., Li, G., Lin, L., Yu, H., & Yu, Y. (2018). Context-aware semantic inpainting. IEEE transactions on cybernetics, 49(12), 4398-4411.

    Article  Google Scholar 

  • Li, J., He, F., Zhang, L., Du, B., & Tao, D. (2019). Progressive reconstruction of visual structure for image inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 5962-5971).

  • Li, J., Wang, N., Zhang, L., Du, B., & Tao, D. (2020). Recurrent feature reasoning for image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7760-7768).

  • Li, W., Lin, Z., Zhou, K., Qi, L., Wang, Y., & Jia, J. (2022). Mat: Mask-aware transformer for large hole image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10758-10768).

  • Li, W., Yu, X., Zhou, K., Song, Y., Lin, Z., & Jia, J. (2022). Sdm: Spatial diffusion model for large hole image inpainting. arXiv preprint arXiv:2212.02963

  • Li, Y., Liu, S., Yang, J., & Yang, M. H. (2017). Generative face completion. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 3911-3919).

  • Li, Y., Jiang, B., Lu, Y., & Shen, L. (2019). Fine-grained adversarial image inpainting with super resolution. In 2019 International Joint Conference on Neural Networks (IJCNN) (pp. 1-8). IEEE.

  • Li, Z., Lu, C. Z., Qin, J., Guo, C. L., & Cheng, M. M. (2022). Towards an end-to-end framework for flow-guided video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 17562-17571).

  • Liao, H., Funka-Lea, G., Zheng, Y., Luo, J., & Zhou, S. K. (2018). Face Completion with Semantic Knowledge and Collaborative Adversarial Learning. Asian Conf. Comput. Vis., 11361, 382–397.

    Google Scholar 

  • Liao, L., Hu, R., Xiao, J., & Wang, Z. (2018). Edge-aware context encoder for image inpainting. In 2018 IEEE International conference on acoustics, speech and signal processing (ICASSP) (pp. 3156-3160). IEEE.

  • Liao, L., Xiao, J., Wang, Z., Lin, C. W., & Satoh, S. I. (2020). Guidance and evaluation: Semantic-aware image inpainting for mixed scenes. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXVII 16 (pp. 683-700). Springer International Publishing.

  • Liao, L., Xiao, J., Wang, Z., Lin, C. W., & Satoh, S. I. (2021a). Image inpainting guided by coherence priors of semantics and textures. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 6539-6548).

  • Liao, L., Xiao, J., Wang, Z., Lin, C. W., & Satoh, S. (2021). Uncertainty-aware semantic guidance and estimation for image inpainting. IEEE J Selected Topics Sign Process, 15(2), 310–323.

    Article  Google Scholar 

  • Lim, J.H., & Ye, J.C. (2017). Geometric gan. arXiv preprint arXiv:1705.02894

  • Lin, J., Gan, C., & Han, S. (2019). Tsm: Temporal shift module for efficient video understanding. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 7083-7093).

  • Lin, Q., Yan, B., Li, J., & Tan, W. (2020, October). Mmfl: Multimodal fusion learning for text-guided image inpainting. In Proceedings of the 28th ACM international conference on multimedia (pp. 1094-1102).

  • Liu, G., Reda, F. A., Shih, K. J., Wang, T. C., Tao, A., & Catanzaro, B. (2018). Image inpainting for irregular holes using partial convolutions. In Proceedings of the European conference on computer vision (ECCV) (pp. 85-100).

  • Liu, H., Jiang, B., Xiao, Y., & Yang, C. (2019). Coherent semantic attention for image inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4170-4179).

  • Liu, H., Jiang, B., Song, Y., Huang, W., & Yang, C. (2020). Rethinking image inpainting via a mutual encoder-decoder with feature equalizations. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part II 16 (pp. 725-741). Springer International Publishing.

  • Liu, H., Wan, Z., Huang, W., Song, Y., Han, X., & Liao, J. (2021). Pd-gan: Probabilistic diverse gan for image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 9371-9381).

  • Liu, R., Deng, H., Huang, Y., Shi, X., Lu, L., Sun, W., & Li, H. (2021). Fuseformer: Fusing fine-grained information in transformers for video inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 14040-14049).

  • Liu, T., Liao, L., Wang, Z., & Satoh, S. I. (2022). Reference-guided texture and structure inference for image inpainting. In 2022 IEEE international conference on image processing (ICIP) (pp. 1996-2000). IEEE.

  • Liu, W., Cao, C., Liu, J., Ren, C., Wei, Y., & Guo, H. (2021). Fine-grained image inpainting with scale-enhanced generative adversarial network. Pattern Recognition Letters, 143: 81–87.

    Article  Google Scholar 

  • Liu, Z., Luo, P., Wang, X., & Tang, X. (2015). Deep learning face attributes in the wild. In Proceedings of the IEEE international conference on computer vision (pp. 3730-3738).

  • Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., & Guo, B. (2021). Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 10012-10022).

  • Lu, Z., Jiang, J., Huang, J., Wu, G., & Liu, X. (2022). Glama: Joint spatial and frequency loss for general image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1301-1310).

  • Lugmayr, A., Danelljan, M., Van Gool, L., & Timofte, R. (2020). Srflow: Learning the super-resolution space with normalizing flow. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part V 16 (pp. 715-732). Springer International Publishing.

  • Lugmayr, A., Danelljan, M., Romero, A., Yu, F., Timofte, R., & Van Gool, L. (2022). Repaint: Inpainting using denoising diffusion probabilistic models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition

  • Ma, Y., Liu, X., Bai, S., Wang, L., He, D., & Liu, A. (2019, August). Coarse-to-fine image inpainting via region-wise convolutions and non-local correlation. In Ijcai (pp. 3123-3129).

  • Mallat, S. G. (1989). A theory for multiresolution signal decomposition: the wavelet representation. IEEE Trans Pattern Anal Mach Intell, 11(7), 674–693.

    Article  Google Scholar 

  • Mao, X., Li, Q., Xie, H., Lau, R. Y., Wang, Z., & Paul Smolley, S. (2017). Least squares generative adversarial networks. In Proceedings of the IEEE international conference on computer vision (pp. 2794-2802).

  • Masnou, S., & Morel, J. M. (1998, October). Level lines based disocclusion. In Proceedings 1998 International Conference on Image Processing. ICIP98 (Cat. No. 98CB36269) (pp. 259-263). IEEE.

  • Navasardyan, S., & Ohanyan, M. (2020). Image inpainting with onion convolutions. In Proceedings of the Asian conference on computer vision

  • Nazeri, K., Ng, E., Joseph, T., Qureshi, F., & Ebrahimi, M. (2019). Edgeconnect: Structure guided image inpainting using edge prediction. In Proceedings of the IEEE/CVF international conference on computer vision workshops (pp. 0-0).

  • Newson, A., Almansa, A., Fradet, M., Gousseau, Y., & Pérez, P. (2014). Video inpainting of complex scenes. SIAM J Imaging Sciences, 7(4), 1993–2019.

    Article  MathSciNet  Google Scholar 

  • Ni, M., Li, X., & Zuo, W. (2023). NUWA-LIP: language-guided image inpainting with defect-free VQGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14183-14192).

  • Oh, S. W., Lee, S., Lee, J. Y., & Kim, S. J. (2019). Onion-peel networks for deep video completion. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4403-4412).

  • Ojala, T., & Pietikäinen M, Harwood D, (1996). A comparative study of texture measures with classification based on featured distributions. Pattern Recog, 29(1), 51–59.

    Article  Google Scholar 

  • Ojala, T., Pietikainen, M., & Maenpaa, T. (2002). Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans Pattern Anal Mach Intell, 24(7), 971–987.

    Article  Google Scholar 

  • Ouyang, H., Wang, T., & Chen, Q. (2021). Internal video inpainting by implicit long-range propagation. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 14579-14588).

  • Park, T., Liu, M. Y., Wang, T. C., & Zhu, J. Y. (2019). Semantic image synthesis with spatially-adaptive normalization. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2337-2346).

  • Parmar, G., Singh, K.K., Zhang, R., Li, Y., Lu, J., Zhu, J.Y. (2023). Zero-shot image-to-image translation. arXiv preprint arXiv:2302.03027

  • Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., & Efros, A. A. (2016). Context encoders: Feature learning by inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2536-2544).

  • Peng, J., Liu, D., Xu, S., & Li, H. (2021). Generating diverse structure for image inpainting with hierarchical VQ-VAE. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10770-10779)

  • Perazzi, F., Pont-Tuset, J., McWilliams, B., Van Gool, L., Gross, M., & Sorkine-Hornung, A. (2016). A benchmark dataset and evaluation methodology for video object segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 724-732).

  • Phutke, S. S., & Murala, S. (2021). Diverse receptive field based adversarial concurrent encoder network for image inpainting. IEEE Sign Process Letters, 28, 1873–1877.

  • Qin, J., Bai, H., & Zhao, Y. (2021). Multi-scale attention network for image inpainting. Comput Vis Image Understand, 204, 103155.

  • Qiu, J., Gao, Y., & Shen, M. (2021). Semantic-SCA: Semantic structure image inpainting with the spatial-channel attention. IEEE Access, 9, 12997–13008.

  • Quan, W., Zhang, R., Zhang, Y., Li, Z., Wang, J., & Yan, D. M. (2022). Image inpainting with local and global refinement. IEEE Trans Image Process, 31, 2405–2420.

  • Radford, A., Metz, L., & Chintala, S. (2016). Unsupervised representation learning with deep convolutional generative adversarial networks. In The International Conference on Learning Representations

  • Ren, J., Zheng, Q., Zhao, Y., Xu, X., & Li, C. (2022). Dlformer: Discrete latent transformer for video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3511-3520).

  • Ren, J. S., Xu, L., Yan, Q., & Sun, W. (2015). Shepard convolutional neural networks. Advances in Neural Information Processing Systems, 28, 901–909.

  • Ren, Y., Yu, X., Zhang, R., Li, T. H., Liu, S., & Li, G. (2019). Structureflow: Image inpainting via structure-aware appearance flow. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 181-190).

  • Rezende, D., & Mohamed, S. (2015). Variational inference with normalizing flows. In International conference on machine learning (pp. 1530-1538).

  • Richardson, E., Alaluf, Y., Patashnik, O., Nitzan, Y., Azar, Y., Shapiro, S., & Cohen-Or, D. (2021). Encoding in style: a stylegan encoder for image-to-image translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2287-2296).

  • Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 10684-10695).

  • Rössler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nießner, M. (2018). Faceforensics: A large-scale video dataset for forgery detection in human faces. arXiv preprint arXiv:1803.09179

  • Roy, H., Chaudhury, S., Yamasaki, T., & Hashimoto, T. (2021). Image inpainting using frequency-domain priors. J Electronic Imaging, 30(2), 023016.

  • Ruder, M., Dosovitskiy, A., & Brox, T. (2016). Artistic style transfer for videos. In Pattern Recognition: 38th German Conference, GCPR 2016, Hannover, Germany, September 12-15, 2016, Proceedings 38 (pp. 26-36). Springer International Publishing.

  • Rudin, L. I., Osher, S., & Fatemi, E. (1992). Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1), 259–268.

  • Sagong, M. C., Shin, Y. G., Kim, S. W., Park, S., & Ko, S. J. (2019). Pepsi: Fast image inpainting with parallel decoding network. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11360-11368).

  • Saharia, C., Chan, W., Chang, H., Lee, C., Ho, J., Salimans, T., & Norouzi, M. (2022). Palette: Image-to-image diffusion models. In ACM SIGGRAPH 2022 conference proceedings (pp. 1-10).

  • Saharia, C., Chan, W., Saxena, S., Li, L., Whang, J., Denton, E. L., & Norouzi, M. (2022). Photorealistic text-to-image diffusion models with deep language understanding. Advances in Neural Information Processing Systems, 35, 36479–36494.

  • Schrader, K., Peter, P., Kämper, N., & Weickert, J. (2023). Efficient neural generation of 4K masks for homogeneous diffusion inpainting. In International conference on scale space and variational methods in computer vision (pp. 16-28).

  • Schuhmann, C., Beaumont, R., Vencu, R., Gordon, C. W., Wightman, R., Cherti, M., Coombes, T., Katta, A., Mullis, C., Wortsman, M., Schramowski, P., Kundurthy, S. R., Crowson, K., Schmidt, L., Kaczmarczyk, R., & Jitsev, J. (2022). LAION-5B: An open large-scale dataset for training next generation image-text models. Advances in Neural Information Processing Systems, 35, 25278–25294.

  • Shao, H., Wang, Y., Fu, Y., & Yin, Z. (2020). Generative image inpainting via edge structure and color aware fusion. Sign Process: Image Communication, 87, 115929.

  • Shen, L., Hong, R., Zhang, H., Zhang, H., & Wang, M. (2019). Single-shot semantic image inpainting with densely connected generative networks. In Proceedings of the 27th ACM International Conference on Multimedia (pp. 1861-1869).

  • Shin, Y. G., Sagong, M. C., Yeo, Y. J., Kim, S. W., & Ko, S. J. (2021). Pepsi++: Fast and lightweight network for image inpainting. IEEE Trans Neural Networks Learn Syst, 32(1), 252–265.

  • Shukla, T., Maheshwari, P., Singh, R., Shukla, A., Kulkarni, K., & Turaga, P. (2023). Scene graph driven text-prompt generation for image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 759-768).

  • Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556

  • Sohl-Dickstein, J., Weiss, E., Maheswaranathan, N., & Ganguli, S. (2015). Deep unsupervised learning using nonequilibrium thermodynamics. In International conference on machine learning (pp. 2256-2265). PMLR.

  • Song, L., Cao, J., Song, L., Hu, Y., & He, R. (2019). Geometry-aware face completion and editing. In Proceedings of the AAAI conference on artificial intelligence (Vol. 33, No. 01, pp. 2506-2513).

  • Song, Y., Yang, C., Lin, Z., Liu, X., Huang, Q., Li, H., & Kuo, C. C. J. (2018a). Contextual-based image inpainting: Infer, match, and translate. In Proceedings of the European conference on computer vision (ECCV) (pp. 3-19).

  • Song, Y., Yang, C., Shen, Y., Wang, P., Huang, Q., & Kuo, C. C. J. (2018b). SPG-Net: Segmentation prediction and guidance network for image inpainting. In British Machine Vision Conference (BMVC).

  • Sun, D., Yang, X., Liu, M. Y., & Kautz, J. (2018a). PWC-Net: CNNs for optical flow using pyramid, warping, and cost volume. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8934-8943).

  • Sun, K., Xiao, B., Liu, D., & Wang, J. (2019). Deep high-resolution representation learning for human pose estimation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5693-5703).

  • Sun, Q., Ma, L., Oh, S. J., Van Gool, L., Schiele, B., & Fritz, M. (2018b). Natural and effective obfuscation by head inpainting. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5050-5059).

  • Suvorov, R., Logacheva, E., Mashikhin, A., Remizova, A., Ashukha, A., Silvestrov, A., & Lempitsky, V. (2022). Resolution-robust large mask inpainting with fourier convolutions. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 2149-2159).

  • Tabak, E. G., & Vanden-Eijnden, E. (2010). Density estimation by dual ascent of the log-likelihood. Commun Math Sci, 8(1), 217–233.

  • Tschumperlé, D., & Deriche, R. (2005). Vector-valued image regularization with pdes: a common framework for different applications. IEEE Trans Pattern Anal Mach Intell, 27(4), 506–517.

  • Tu, C. T., & Chen, Y. F. (2019, August). Facial image inpainting with variational autoencoder. In 2019 2nd international conference of intelligent robotic and control engineering (IRCE) (pp. 119-122). IEEE.

  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30.

  • Vo, H. V., Duong, N. Q., & Pérez, P. (2018). Structural inpainting. In Proceedings of the 26th ACM international conference on multimedia (pp. 1948-1956).

  • Wadhwa, G., Dhall, A., Murala, S., & Tariq, U. (2021). Hyperrealistic image inpainting with hypergraphs. In Proceedings of the IEEE/CVF winter conference on applications of computer vision (pp. 3912-3921).

  • Wan, Z., Zhang, B., Chen, D., Zhang, P., Chen, D., Liao, J., & Wen, F. (2020). Bringing old photos back to life. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 2747-2757).

  • Wan, Z., Zhang, J., Chen, D., & Liao, J. (2021). High-fidelity pluralistic image completion with transformers. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4692-4701).

  • Wang, C., Huang, H., Han, X., & Wang, J. (2019). Video inpainting by jointly learning temporal structure and spatial details. In Proceedings of the AAAI conference on artificial intelligence (pp. 5232-5239).

  • Wang, C., Zhu, Y., & Yuan, C. (2022). Diverse Image Inpainting with Normalizing Flow. In European conference on computer vision (pp. 53-69).

  • Wang, J., Wang, C., Huang, Q., Shi, Y., Cai, J. F., Zhu, Q., & Yin, B. (2020). Image inpainting based on multi-frequency probabilistic inference model. In Proceedings of the 28th ACM international conference on multimedia (pp. 1-9).

  • Wang, N., Li, J., Zhang, L., & Du, B. (2019). MUSICAL: Multi-scale image contextual attention learning for inpainting. In Proceedings of the International Joint Conference on Artificial Intelligence (pp. 3748-3754).

  • Wang, N., Ma, S., Li, J., Zhang, Y., & Zhang, L. (2020). Multistage attention network for image inpainting. Pattern Recog, 106, 107448.

  • Wang, N., Zhang, Y., & Zhang, L. (2021). Dynamic selection network for image inpainting. IEEE Trans Image Process, 30, 1784–1798.

  • Wang, S., Saharia, C., Montgomery, C., Pont-Tuset, J., Noy, S., Pellegrini, S., & Chan, W. (2023). Imagen editor and editbench: Advancing and evaluating text-guided image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 18359-18369).

  • Wang, T., Ouyang, H., & Chen, Q. (2021). Image inpainting with external-internal learning and monochromic bottleneck. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5120-5129).

  • Wang, T. C., Liu, M. Y., Zhu, J. Y., Liu, G., Tao, A., Kautz, J., & Catanzaro, B. (2018a). Video-to-video synthesis. Advances in Neural Information Processing Systems, 31.

  • Wang, W., Zhang, J., Niu, L., Ling, H., Yang, X., & Zhang, L. (2021). Parallel multi-resolution fusion network for image inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 14559-14568).

  • Wang, W., Niu, L., Zhang, J., Yang, X., & Zhang, L. (2022b). Dual-path image inpainting with auxiliary GAN inversion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11421-11430).

  • Wang, X., Girshick, R., Gupta, A., & He, K. (2018). Non-local neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 7794-7803).

  • Wang, Y., Tao, X., Qi, X., Shen, X., & Jia, J. (2018). Image inpainting via generative multi-column convolutional neural networks. Advances in Neural Information Processing Systems, 31.

  • Wang, Y., Chen, Y. C., Tao, X., & Jia, J. (2020). Vcnet: A robust approach to blind image inpainting. In Computer Vision-ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XXV 16 (pp. 752-768). Springer International Publishing.

  • Wang, Z., Simoncelli, E. P., & Bovik, A. C. (2003, November). Multiscale structural similarity for image quality assessment. In The Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, 2003 (Vol. 2, pp. 1398-1402). IEEE.

  • Wang, Z., Bovik, A. C., Sheikh, H. R., & Simoncelli, E. P. (2004). Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process, 13(4), 600–612.

  • Weng, Y., Ding, S., & Zhou, T. (2022). A survey on improved GAN based image inpainting. In 2022 2nd international conference on consumer electronics and computer engineering (ICCECE) (pp. 319-322). IEEE.

  • Wexler, Y., Shechtman, E., & Irani, M. (2007). Space-time completion of video. IEEE Trans Pattern Anal Mach Intell, 29(3), 463–476.

  • Woo, S., Kim, D., Park, K., Lee, J. Y., & Kweon, I. S. (2019). Align-and-attend network for globally and locally coherent video inpainting. In The British Machine Vision Conference (BMVC) (pp. 1-13).

  • Wu, H., Zhou, J., & Li, Y. (2022). Deep generative model for image inpainting with local binary pattern learning and spatial attention. IEEE Trans Multimedia, 24, 4016–4027.

  • Wu, L., Zhang, C., Liu, J., Han, J., Liu, J., Ding, E., & Bai, X. (2019). Editing text in the wild. In Proceedings of the 27th ACM international conference on multimedia (pp. 1500-1508).

  • Wu, X., Xie, Y., Zeng, J., Yang, Z., Yu, Y., Li, Q., & Liu, W. (2021). Adversarial learning with mask reconstruction for text-guided image inpainting. In Proceedings of the ACM international conference on multimedia (pp. 3464-3472).

  • Xia, W., Zhang, Y., Yang, Y., Xue, J. H., Zhou, B., & Yang, M. H. (2022). GAN inversion: A survey. IEEE Trans Pattern Anal Mach Intell, 1–17.

  • Xie, C., Liu, S., Li, C., Cheng, M. M., Zuo, W., Liu, X., Wen, S., & Ding, E. (2019). Image inpainting with learnable bidirectional attention maps. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 8858-8867).

  • Xie, M., Li, C., Liu, X., & Wong, T. T. (2020). Manga filling style conversion with screentone variational autoencoder. ACM Trans Graph, 39(6).

  • Xie, M., Xia, M., Liu, X., Li, C., & Wong, T. T. (2021). Seamless manga inpainting with semantics awareness. ACM Trans Graph, 40(4).

  • Xie, S., Zhang, Z., Lin, Z., Hinz, T., & Zhang, K. (2023). SmartBrush: Text and shape guided object inpainting with diffusion model. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 22428-22437).

  • Xie, Y., Lin, Z., Yang, Z., Deng, H., Wu, X., Mao, X., Li, Q., & Liu, W. (2022). Learning semantic alignment from image for text-guided image inpainting. The Visual Computer, 38(9–10), 3149–3161.

  • Xiong, W., Yu, J., Lin, Z., Yang, J., Lu, X., Barnes, C., & Luo, J. (2019). Foreground-aware image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5833-5841).

  • Xu, N., Yang, L., Fan, Y., Yang, J., Yue, D., Liang, Y., Price, B., Cohen, S., & Huang, T. (2018a). YouTube-VOS: Sequence-to-sequence video object segmentation. In European conference on computer vision (pp. 585-601).

  • Xu, R., Li, X., Zhou, B., & Loy, C. C. (2019). Deep flow-guided video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 3723-3732).

  • Xu, R., Guo, M., Wang, J., Li, X., Zhou, B., & Loy, C. C. (2021). Texture memory-augmented deep patch-based image inpainting. IEEE Trans Image Process, 30, 9112–9124.

  • Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., & He, X. (2018b). AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1316-1324).

  • Yamashita, Y., Shimosato, K., & Ukita, N. (2022). Boundary-aware image inpainting with multiple auxiliary cues. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition workshops (pp. 619-629).

  • Yan, Z., Li, X., Li, M., Zuo, W., & Shan, S. (2018). Shift-Net: Image inpainting via deep feature rearrangement. In European conference on computer vision (pp. 3-19).

  • Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., & Li, H. (2017). High-resolution image inpainting using multi-scale neural patch synthesis. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6721-6729).

  • Yang, J., Qi, Z., & Shi, Y. (2020). Learning to incorporate structure knowledge for image inpainting. In Proceedings of the AAAI conference on artificial intelligence (Vol. 34, pp. 12605-12612).

  • Yang, L., Zhang, Z., Song, Y., Hong, S., Xu, R., Zhao, Y., Zhang, W., Cui, B., & Yang, M. H. (2023). Diffusion models: A comprehensive survey of methods and applications. arXiv preprint arXiv:2209.00796

  • Yeh, R. A., Chen, C., Lim, T. Y., Schwing, A. G., Hasegawa-Johnson, M., & Do, M. N. (2017). Semantic image inpainting with deep generative models. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6882-6890).

  • Yi, Z., Tang, Q., Azizi, S., Jang, D., & Xu, Z. (2020). Contextual residual aggregation for ultra high-resolution image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 7508-7517).

  • Yu, F., & Koltun, V. (2016). Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations.

  • Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., & Huang, T. S. (2018). Generative image inpainting with contextual attention. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 5505-5514).

  • Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., & Huang, T. S. (2019). Free-form image inpainting with gated convolution. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 4471-4480).

  • Yu, T., Guo, Z., Jin, X., Wu, S., Chen, Z., Li, W., Zhang, Z., & Liu, S. (2020). Region normalization for image inpainting. In Proceedings of the AAAI conference on artificial intelligence (pp. 12733-12740).

  • Yu, Y., Zhan, F., Lu, S., Pan, J., Ma, F., Xie, X., & Miao, C. (2021a). WaveFill: A wavelet-based generation network for image inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 14114-14123).

  • Yu, Y., Zhan, F., Wu, R., Pan, J., Cui, K., Lu, S., Ma, F., Xie, X., & Miao, C. (2021b). Diverse image inpainting with bidirectional and autoregressive transformers. In Proceedings of the ACM international conference on multimedia (pp. 69-78).

  • Yu, Y., Du, D., Zhang, L., & Luo, T. (2022a). Unbiased multi-modality guidance for image inpainting. In European conference on computer vision (pp. 668-684).

  • Yu, Y., Zhang, L., Fan, H., & Luo, T. (2022b). High-fidelity image inpainting with GAN inversion. In European conference on computer vision (pp. 242-258).

  • Zeng, Y., Fu, J., Chao, H., & Guo, B. (2019). Learning pyramid-context encoder network for high-quality image inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1486-1494).

  • Zeng, Y., Fu, J., & Chao, H. (2020a). Learning joint spatial-temporal transformations for video inpainting. In European conference on computer vision (pp. 528-543). Springer.

  • Zeng, Y., Lin, Z., Yang, J., Zhang, J., Shechtman, E., & Lu, H. (2020b). High-resolution image inpainting with iterative confidence feedback and guided upsampling. In European conference on computer vision.

  • Zeng, Y., Gong, Y., & Zhang, J. (2021a). Feature learning and patch matching for diverse image inpainting. Pattern Recog, 119, 108036.

  • Zeng, Y., Lin, Z., Lu, H., & Patel, V. M. (2021b). CR-Fill: Generative image inpainting with auxiliary contextual reconstruction. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 14164-14173).

  • Zeng, Y., Fu, J., Chao, H., & Guo, B. (2022). Aggregated contextual transformations for high-resolution image inpainting. IEEE Trans Vis Comput Graph.

  • Zhang, B., Gao, Y., Zhao, S., & Liu, J. (2010). Local derivative pattern versus local binary pattern: Face recognition with high-order local pattern descriptor. IEEE Trans Image Process, 19(2), 533–544.

  • Zhang, H., Hu, Z., Luo, C., Zuo, W., & Wang, M. (2018a). Semantic image inpainting with progressive generative networks. In Proceedings of the ACM international conference on multimedia (pp. 1939-1947).

  • Zhang, H., Mai, L., Xu, N., Wang, Z., Collomosse, J., & Jin, H. (2019a). An internal learning approach to video inpainting. In Proceedings of the IEEE/CVF international conference on computer vision (pp. 2720-2729).

  • Zhang, J., Niu, L., Yang, D., Kang, L., Li, Y., Zhao, W., & Zhang, L. (2019b). GAIN: Gradient augmented inpainting network for irregular holes. In Proceedings of the ACM international conference on multimedia (pp. 1870-1878).

  • Zhang, K., Fu, J., & Liu, D. (2022a). Flow-guided transformer for video inpainting. In European conference on computer vision (pp. 74-90).

  • Zhang, K., Fu, J., & Liu, D. (2022b). Inertia-guided flow completion and style fusion for video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5982-5991).

  • Zhang, L., & Agrawala, M. (2023). Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543

  • Zhang, L., Chen, Q., Hu, B., & Jiang, S. (2020a). Text-guided neural image inpainting. In Proceedings of the ACM international conference on multimedia (pp. 1302-1310).

  • Zhang, L., Barnes, C., Wampler, K., Amirghodsi, S., Shechtman, E., Lin, Z., & Shi, J. (2022c). Inpainting at modern camera resolution by guided PatchMatch with auto-curation. In European conference on computer vision (pp. 51-67).

  • Zhang, L., Zhou, Y., Barnes, C., Amirghodsi, S., Lin, Z., Shechtman, E., & Shi, J. (2022). Perceptual artifacts localization for inpainting. In European conference on computer vision (pp. 146-164).

  • Zhang, R., Isola, P., Efros, A. A., Shechtman, E., & Wang, O. (2018b). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 586-595).

  • Zhang, R., Quan, W., Wu, B., Li, Z., & Yan, D. M. (2020b). Pixel-wise dense detector for image inpainting. Comput Graph Forum, 39(7).

  • Zhang, R., Quan, W., Zhang, Y., Wang, J., & Yan, D. M. (2022e). W-net: Structure and texture interaction for image inpainting. IEEE Trans Multimedia, 1–12.

  • Zhang, S., He, R., Sun, Z., & Tan, T. (2018). Demeshnet: Blind face inpainting for deep meshface verification. IEEE Trans Inf Forensics Secur, 13(3), 637–647.

  • Zhang, W., Zhu, J., Tai, Y., Wang, Y., Chu, W., Ni, B., Wang, C., & Yang, X. (2021). Context-aware image inpainting with learned semantic priors. In Proceedings of the International Joint Conference on Artificial Intelligence (pp. 1323-1329).

  • Zhang, Z., Zhao, Z., Zhang, Z., Huai, B., & Yuan, J. (2020c). Text-guided image inpainting. In Proceedings of the ACM international conference on multimedia (pp. 4079-4087).

  • Zhao, L., Mo, Q., Lin, S., Wang, Z., Zuo, Z., Chen, H., Xing, W., & Lu, D. (2020). UCTGAN: Diverse image inpainting based on unsupervised cross-space translation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 5740-5749).

  • Zhao, S., Cui, J., Sheng, Y., Dong, Y., Liang, X., Chang, E. I., & Xu, Y. (2021). Large scale image completion via co-modulated generative adversarial networks. In International Conference on Learning Representations.

  • Zhao, W., Rao, Y., Liu, Z., Liu, B., Zhou, J., & Lu, J. (2023). Unleashing text-to-image diffusion models for visual perception. arXiv preprint arXiv:2303.02153

  • Zheng, C., Cham, T. J., & Cai, J. (2019). Pluralistic image completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 1438-1447).

  • Zheng, C., Cham, T. J., & Cai, J. (2021). Pluralistic free-form image completion. Int J Comput Vis, 129, 2786–2805.

  • Zheng, C., Cham, T. J., Cai, J., & Phung, D. (2022a). Bridging global context interactions for high-fidelity image completion. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 11512-11522).

  • Zheng, H., Zhang, Z., Wang, Y., Zhang, Z., Xu, M., Yang, Y., & Wang, M. (2021b). GCM-Net: Towards effective global context modeling for image inpainting. In Proceedings of the ACM international conference on multimedia (pp. 2586-2594).

  • Zheng, H., Lin, Z., Lu, J., Cohen, S., Shechtman, E., Barnes, C., Zhang, J., Xu, N., Amirghodsi, S., & Luo, J. (2022b). Image inpainting with cascaded modulation GAN and object-aware training. In European conference on computer vision (pp. 277-296).

  • Zhou, B., Lapedriza, A., Khosla, A., Oliva, A., & Torralba, A. (2017). Places: A 10 million image database for scene recognition. IEEE Trans Pattern Anal Mach Intell, 40(6), 1452–1464.

  • Zhou, B., Zhao, H., Puig, X., Xiao, T., Fidler, S., Barriuso, A., & Torralba, A. (2018). Semantic understanding of scenes through the ade20k dataset. Int J Comput Vis, 127, 302–321.

  • Zhou, X., Li, J., Wang, Z., He, R., & Tan, T. (2021). Image inpainting with contrastive relation network. In International Conference on Pattern Recognition (pp. 4420-4427).

  • Zhu, M., He, D., Li, X., Li, C., Li, F., Liu, X., Ding, E., & Zhang, Z. (2021). Image inpainting by end-to-end cascaded refinement with mask awareness. IEEE Trans Image Process, 30, 4855–4866.

  • Zou, X., Yang, L., Liu, D., & Lee, Y. J. (2021). Progressive temporal feature alignment network for video inpainting. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (pp. 16448-16457).

Acknowledgements

This work was partially supported by the National Natural Science Foundation of China (62102418, 62172415, 61972271), the Beijing Science and Technology Plan Project (Z231100005923033), and the Open Project Program of National Key Laboratory of Fundamental Science on Synthetic Vision, Sichuan University (No.2021SCUVS002).

Author information

Correspondence to Dong-Ming Yan.

Additional information

Communicated by Jiaya Jia.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Quan, W., Chen, J., Liu, Y. et al. Deep Learning-Based Image and Video Inpainting: A Survey. Int J Comput Vis (2024). https://doi.org/10.1007/s11263-023-01977-6
