Abstract
This work presents a method for inpainting occluded facial regions in images with unconstrained pose and orientation. The approach first warps the facial region onto a reference model to synthesize a frontal view. A modified Robust Principal Component Analysis (RPCA) approach is then used to suppress warping errors. A novel local patch-based face inpainting algorithm then hallucinates the missing pixels using a dictionary of face images that are pre-aligned to the same reference model. Finally, the hallucinated region is warped back onto the original image to restore the missing pixels.
Experimental results on synthetic occlusions demonstrate that the proposed face inpainting method achieves the best performance, with PSNR gains of up to 0.74 dB over the second-best method. Moreover, experiments on the COFW dataset and on a number of real-world images show that the proposed method successfully restores occluded facial regions in the wild, even for CCTV-quality images.
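The RPCA step decomposes a stack of warped face images into a low-rank part (the clean faces) and a sparse part (warping errors and occlusions), following [42] and the inexact augmented Lagrange multiplier solver of [59]. As an illustration only, here is a minimal NumPy sketch of that decomposition; the function name `rpca_ialm` and all parameter defaults are our own choices, not taken from the paper:

```python
import numpy as np

def rpca_ialm(M, lam=None, tol=1e-7, max_iter=500):
    """Decompose M into low-rank L and sparse S via inexact ALM.

    Solves: min ||L||_* + lam * ||S||_1  subject to  M = L + S.
    """
    m, n = M.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))  # default from Candes et al.
    norm_M = np.linalg.norm(M, 'fro')
    # Standard dual-variable initialisation
    Y = M / max(np.linalg.norm(M, 2), np.abs(M).max() / lam)
    mu = 1.25 / np.linalg.norm(M, 2)
    mu_bar, rho = mu * 1e7, 1.5
    S = np.zeros_like(M)
    for _ in range(max_iter):
        # Low-rank update: singular value thresholding at 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        sig = np.maximum(sig - 1.0 / mu, 0.0)
        L = (U * sig) @ Vt
        # Sparse update: elementwise soft thresholding at lam/mu
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual ascent on the constraint M = L + S
        Y = Y + mu * (M - L - S)
        mu = min(rho * mu, mu_bar)
        if np.linalg.norm(M - L - S, 'fro') < tol * norm_M:
            break
    return L, S
```

On a matrix that is genuinely a low-rank term plus sparse corruption, the sketch recovers both terms; in the paper's setting the columns of `M` would be vectorised frontalized faces, so the sparse component absorbs the warping errors.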
Notes
1. Frontalization is a term recently introduced in [19] to refer to the process of synthesizing a frontal view of a person whose original pose is unconstrained.
2. The accuracy of the segmentation process is dependent on the number of landmarks used. In this example, one can use more landmark points to segment the lower part of the face region (and possibly the entire occluded region) in segment \(\mathbf {F}\).
3. An inter-eye distance of 40 pixels is sufficient for identification. Nevertheless, the method is not restricted to this resolution, and higher (or lower) resolutions can be configured.
4. These pixels are known in the dictionary \(\mathbf {D}_p^{u}\) since the training images do not have occlusions. However, these pixels are collocated with the unknown pixels within the patch being inpainted \(\varPsi _p\).
5. Subjects wearing glasses were removed since we want to use the training images to synthesize people without facial occlusions.
6. Face Inpainting Demo: https://goo.gl/ws3NG4.
7. The images provided by Burgos-Artizzu et al. [18] were in grayscale, and therefore only results on grayscale images are presented here.
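Note 4 is what makes the patch-based hallucination possible: the dictionary rows collocated with a patch's missing pixels are fully known, so combination weights fitted on the known pixels can be transferred to the unknown ones. A minimal sketch of this idea follows, using a simple nearest-neighbour least-squares combination rather than the paper's actual coding scheme; the function `hallucinate_patch` and its parameters are illustrative, not from the paper:

```python
import numpy as np

def hallucinate_patch(psi, mask, D, k=5):
    """Fill the unknown pixels of a patch from co-located training patches.

    psi  : (d,) patch vector with missing entries.
    mask : (d,) bool array, True where the pixel is known.
    D    : (d, N) dictionary, one occlusion-free training patch per column.
    k    : number of nearest training patches to combine.
    """
    Dk = D[mask]    # dictionary rows at the known pixel positions
    Du = D[~mask]   # rows at the unknown positions (known in D, see note 4)
    # Select the k training patches closest to psi on the known support
    dist = np.linalg.norm(Dk - psi[mask, None], axis=0)
    nn = np.argsort(dist)[:k]
    # Fit combination weights on the known pixels only
    A = Dk[:, nn]
    w, *_ = np.linalg.lstsq(A, psi[mask], rcond=None)
    # Transfer the same weights to the unknown pixel positions
    out = psi.copy()
    out[~mask] = Du[:, nn] @ w
    return out
```

If the occluded patch happens to lie in the span of a few dictionary patches, the weights fitted on the visible pixels reproduce the hidden ones exactly; in practice the fit is approximate and the paper's local, position-aware dictionary keeps that approximation plausible.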
References
Hancock, P.J., Bruce, V., Burton, A.: Recognition of unfamiliar faces. Trends Cogn. Sci. 4, 330–337 (2000)
Terry, R.L.: How wearing eyeglasses affects facial recognition. Curr. Psychol. 12, 151–162 (1993)
Yarmey, A.D.: Eyewitness recall and photo identification: a field experiment. Psychol. Crime Law 10, 53–68 (2004)
Klare, B.F., Klein, B., Taborsky, E., Blanton, A., Cheney, J., Allen, K., Grother, P., Mah, A., Burge, M., Jain, A.K.: Pushing the frontiers of unconstrained face detection and recognition: IARPA Janus benchmark A. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1931–1939 (2015)
Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Annual Conference on Computer Graphics and Interactive Techniques, pp. 417–424 (2000)
Criminisi, A., Perez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. IEEE Trans. Image Process. 13, 1200–1212 (2004)
Liu, J., Musialski, P., Wonka, P., Ye, J.: Tensor completion for estimating missing values in visual data. IEEE Trans. Pattern Anal. Mach. Intell. 35, 208–220 (2013)
Guillemot, C., Meur, O.L.: Image inpainting: overview and recent advances. IEEE Sig. Process. Mag. 31, 127–144 (2014)
Mo, Z., Lewis, J., Neumann, U.: Face inpainting with local linear representations. In: British Machine Vision Conference, pp. 37.1–37.10 (2004)
Wang, Z.M., Tao, J.H.: Reconstruction of partially occluded face by fast recursive PCA. In: International Conference on Computational Intelligence and Security Workshops, pp. 304–307 (2007)
Zhou, Z., Wagner, A., Mobahi, H., Wright, J., Ma, Y.: Face recognition with contiguous occlusion using Markov random fields. In: IEEE International Conference on Computer Vision, pp. 1050–1057 (2009)
Lin, D., Tang, X.: Quality-driven face occlusion detection and recovery. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–7 (2007)
Hosoi, T., Nagashima, S., Kobayashi, K., Ito, K., Aoki, T.: Restoring occluded regions using FW-PCA for face recognition. In: IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 23–30 (2012)
Hwang, B.W., Lee, S.W.: Reconstruction of partially damaged face images based on a morphable face model. IEEE Trans. Pattern Anal. Mach. Intell. 25, 365–372 (2003)
Liwicki, S., Tzimiropoulos, G., Zafeiriou, S., Pantic, M.: Euler principal component analysis. Int. J. Comput. Vis. 101, 498–518 (2012)
Min, R., Dugelay, J.L.: Inpainting of sparse occlusion in face recognition. In: IEEE International Conference on Image Processing, pp. 1425–1428 (2012)
Roth, S., Black, M.J.: Fields of experts. Int. J. Comput. Vis. 82, 205–229 (2009)
Burgos-Artizzu, X.P., Zepeda, J., Clerc, F.L., Perez, P.: Pose and expression-coherent face recovery in the wild. In: IEEE International Conference on Computer Vision Workshop, pp. 877–885 (2015)
Hassner, T., Harel, S., Paz, E., Enbar, R.: Effective face frontalization in unconstrained images. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 4295–4304 (2015)
Candès, E.J., Li, X., Ma, Y., Wright, J.: Robust principal component analysis? J. ACM 58, 11:1–11:37 (2011)
Yuan, Z., Xie, X., Ma, X., Lam, K.M.: Color facial image denoising based on RPCA and noisy pixel detection. In: IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 2449–2453 (2013)
Peng, Y., Ganesh, A., Wright, J., Xu, W., Ma, Y.: RASL: robust alignment by sparse and low-rank decomposition for linearly correlated images. IEEE Trans. Pattern Anal. Mach. Intell. 34, 2233–2246 (2012)
Wong, A., Orchard, J.: A nonlocal-means approach to exemplar-based inpainting. In: IEEE International Conference on Image Processing, pp. 2600–2603 (2008)
Xu, Z., Sun, J.: Image inpainting by patch propagation using patch sparsity. IEEE Trans. Image Process. 19, 1153–1165 (2010)
Studer, C., Kuppinger, P., Pope, G., Bolcskei, H.: Recovery of sparsely corrupted signals. IEEE Trans. Inf. Theor. 58, 3115–3130 (2012)
Guillemot, C., Turkan, M., Meur, O.L., Ebdelli, M.: Object removal and loss concealment using neighbor embedding methods. Sig. Process. Image Commun. 28, 1405–1419 (2013)
Ma, X., Zhang, J., Qi, C.: Position-based face hallucination method. In: Proceedings of the IEEE International Conference on Multimedia and Expo, pp. 290–293 (2009)
Jiang, J., Hu, R., Wang, Z., Han, Z.: Face super-resolution via multilayer locality-constrained iterative neighbor embedding and intermediate dictionary learning. IEEE Trans. Image Process. 23, 4220–4231 (2014)
Burgos-Artizzu, X.P., Perona, P., Dollár, P.: Robust face landmark estimation under occlusion. In: IEEE International Conference on Computer Vision, pp. 1513–1520 (2013)
Mairal, J., Bach, F., Ponce, J., Sapiro, G.: Online dictionary learning for sparse coding. In: Annual International Conference on Machine Learning, pp. 689–696 (2009)
Zhu, X., Ramanan, D.: Face detection, pose estimation, and landmark localization in the wild. In: IEEE Conference on Computer Vision and Pattern Recognition, pp. 2879–2886 (2012)
Koestinger, M., Wohlhart, P., Roth, P.M., Bischof, H.: Annotated facial landmarks in the wild: a large-scale, real-world database for facial landmark localization. In: IEEE International Conference on Computer Vision, pp. 2144–2151 (2011)
Roweis, S.T., Saul, L.K.: Nonlinear dimensionality reduction by locally linear embedding. Science 290, 2323–2326 (2000)
Belkin, M., Niyogi, P.: Laplacian eigenmaps for dimensionality reduction and data representation. Neural Comput. 15, 1373–1396 (2003)
He, X., Yan, S., Hu, Y., Niyogi, P., Zhang, H.J.: Face recognition using Laplacianfaces. IEEE Trans. Pattern Anal. Mach. Intell. 27, 328–340 (2005)
Hu, C., Chang, Y., Feris, R., Turk, M.: Manifold based analysis of facial expression. In: IEEE International Conference on Computer Vision and Pattern Recognition, p. 81 (2004)
Lin, M., Chen, L., Wu, Y.M.: The augmented Lagrange multiplier method for exact recovery of corrupted low-rank matrices. Technical report, University of Illinois (2009)
Phillips, P.J., Wechsler, H., Huang, J., Rauss, P.J.: The FERET database and evaluation procedure for face-recognition algorithms. Image Vis. Comput. 16, 295–306 (1998)
Huang, G.B., Ramesh, M., Berg, T., Learned-Miller, E.: Labeled faces in the wild: a database for studying face recognition in unconstrained environments. Technical report 07–49, University of Massachusetts, Amherst (2007)
Martinez, A.M., Benavente, R.: The AR Face Database. Technical report, CVC (1998)
© 2017 Springer International Publishing AG
Cite this paper
Farrugia, R.A., Guillemot, C. (2017). Model and Dictionary Guided Face Inpainting in the Wild. In: Chen, CS., Lu, J., Ma, KK. (eds) Computer Vision – ACCV 2016 Workshops. ACCV 2016. Lecture Notes in Computer Science(), vol 10116. Springer, Cham. https://doi.org/10.1007/978-3-319-54407-6_5
Print ISBN: 978-3-319-54406-9
Online ISBN: 978-3-319-54407-6