Symmetry-Aware Face Completion with Generative Adversarial Networks

  • Jiawan Zhang
  • Rui Zhan
  • Di Sun
  • Gang Pan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11364)

Abstract

Face completion is a challenging task in computer vision. Unlike general images, face images usually exhibit strong semantic correlation and symmetry. Without taking these characteristics into account, existing face completion techniques often fail to produce photo-realistic results, especially for missing key components (e.g., eyes and mouths). In this paper, we propose a symmetry-aware face completion method based on facial structural features using a deep generative model. The model is trained with a combination of a reconstruction loss, a structure loss, two adversarial losses and a symmetry loss, which together ensure pixel faithfulness, local-global content integrity and symmetric consistency. We develop a dedicated symmetry detection technique for facial components and show that the symmetrical attention module significantly improves face completion results. Experiments show that our method is capable of synthesizing semantically valid and visually plausible content for missing key facial parts under random masks. In addition, our model outperforms other methods in detail completion of facial components.
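The abstract describes the objective as a weighted combination of five terms. The paper itself defines the exact formulation; the following is only a rough, hypothetical PyTorch-style sketch of how such a combination could be assembled. The weights, the flip-based symmetry term, and the local/global two-discriminator setup are assumptions for illustration, not the authors' implementation.

    # Hypothetical sketch (not the authors' code) of the combined objective
    # named in the abstract: reconstruction + structure + two adversarial
    # losses + symmetry. All weights and term definitions are assumptions.
    import torch
    import torch.nn.functional as F

    def total_loss(completed, target, local_fake_logits, global_fake_logits,
                   structure_pred, structure_gt,
                   w_rec=1.0, w_struct=0.1, w_adv=0.01, w_sym=0.1):
        # Pixel-wise reconstruction loss (L1) over the completed image.
        l_rec = F.l1_loss(completed, target)
        # Structure loss: match predicted facial-structure features
        # (e.g., landmark or edge maps) against ground truth.
        l_struct = F.mse_loss(structure_pred, structure_gt)
        # Two adversarial losses: the generator tries to fool both the
        # local (patch) and the global discriminator.
        ones_l = torch.ones_like(local_fake_logits)
        ones_g = torch.ones_like(global_fake_logits)
        l_adv = (F.binary_cross_entropy_with_logits(local_fake_logits, ones_l)
                 + F.binary_cross_entropy_with_logits(global_fake_logits, ones_g))
        # Symmetry loss: here simplified to agreement with the horizontal
        # mirror of the whole face; the paper instead compares detected
        # symmetric facial components.
        l_sym = F.l1_loss(completed, torch.flip(completed, dims=[-1]))
        return w_rec * l_rec + w_struct * l_struct + w_adv * l_adv + w_sym * l_sym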

Keywords

Face completion · GAN · Symmetry · Image inpainting

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Tianjin University, Tianjin, China
  2. Tianjin University of Science and Technology, Tianjin, China