Identifying Vandalized Regions in Facial Images of Statues for Inpainting

  • Milind G. Padalkar
  • Manali V. Vora
  • Manjunath V. Joshi
  • Mukesh A. Zaveri
  • Mehul S. Raval
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8158)

Abstract

Historical monuments are among the key cultural assets of modern communities. Unfortunately, a variety of factors cause these monuments to be damaged. One may think of digitally undoing the damage by inpainting, a process that fills in missing regions of an image. Most inpainting techniques reported in the literature require manual selection of the regions to be inpainted. In this paper, we propose a novel method that automates the identification of damage to the visually dominant regions, viz. the eyes, nose and lips, in face images of statues, for the purpose of inpainting. First, a bilateral-symmetry-based method is used to locate the eyes, nose and lips. Texton features are then extracted from each of these regions in a multi-resolution framework to characterize both regular and irregular textures. These textons are matched against those extracted from a training set of known vandalized and non-vandalized regions in order to classify the region under consideration. If a region is found to be vandalized, the best-matching non-vandalized region from the training set is used to inpaint it by means of Poisson image editing. Experiments conducted on face images of statues downloaded from the Internet give promising results.
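
The full text describes each stage in detail; as a rough illustration of the classify-then-blend pipeline summarized above, the Python sketch below matches texton histograms of a detected face region against training exemplars and, when a region is flagged as vandalized, blends in the best-matching intact exemplar with OpenCV's seamlessClone (an implementation of Poisson image editing). The small Gaussian/Sobel filter bank, the texton count `k`, and the chi-squared nearest-neighbour rule are illustrative assumptions, not the authors' exact choices.

```python
import cv2
import numpy as np

def filter_responses(gray, ksizes=(3, 5, 7)):
    """Per-pixel responses of a crude multi-scale filter bank
    (Gaussian smoothing plus x/y gradients at three scales)."""
    responses = []
    for k in ksizes:
        blur = cv2.GaussianBlur(gray, (k, k), 0)
        responses.append(blur.astype(np.float32))
        responses.append(cv2.Sobel(blur, cv2.CV_32F, 1, 0, ksize=3))
        responses.append(cv2.Sobel(blur, cv2.CV_32F, 0, 1, ksize=3))
    return np.stack(responses, axis=-1).reshape(-1, 3 * len(ksizes))

def learn_textons(train_grays, k=32):
    """Cluster filter responses pooled over all training crops into k textons."""
    feats = np.vstack([filter_responses(g) for g in train_grays])
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, _, centers = cv2.kmeans(feats, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    return centers

def texton_histogram(gray, textons):
    """Assign every pixel to its nearest texton; return the normalized histogram."""
    feats = filter_responses(gray)
    dists = np.linalg.norm(feats[:, None, :] - textons[None, :, :], axis=2)
    hist = np.bincount(dists.argmin(axis=1), minlength=len(textons)).astype(np.float64)
    return hist / hist.sum()

def chi_squared(h1, h2, eps=1e-10):
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def classify_region(region_gray, textons, train_hists, train_labels):
    """Label an eye/nose/lip crop by its chi-squared nearest training neighbour."""
    h = texton_histogram(region_gray, textons)
    i = int(np.argmin([chi_squared(h, th) for th in train_hists]))
    return train_labels[i], i  # e.g. ('vandalized', index of best match)

def blend_exemplar(face_bgr, exemplar_bgr, mask, center):
    """Clone the best-matching intact exemplar over the damaged region;
    cv2.seamlessClone implements Poisson image editing."""
    return cv2.seamlessClone(exemplar_bgr, face_bgr, mask, center, cv2.NORMAL_CLONE)
```

In use, learn_textons would be run once over grayscale crops of known vandalized and intact eye, nose and lip regions; at test time, each region located by the bilateral-symmetry step is classified, and only regions labelled vandalized are passed to blend_exemplar.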

Keywords

texton · inpainting · vandalism · damage detection · heritage

References

  1. Bertalmio, M., Vese, L., Sapiro, G., Osher, S.: Simultaneous structure and texture image inpainting. Trans. Img. Proc. 12(8), 882–889 (2003)
  2. Bertalmio, M., Sapiro, G., Caselles, V., Ballester, C.: Image inpainting. In: Computer Graphics (SIGGRAPH 2000), pp. 417–424 (2000)
  3. Criminisi, A., Pérez, P., Toyama, K.: Region filling and object removal by exemplar-based image inpainting. Trans. Img. Proc. 13(9), 1200–1212 (2004)
  4. Emerson, C., Lam, N., Quattrochi, D.: Multi-scale fractal analysis of image texture and patterns. Photogrammetric Engg. and Remote Sensing 65(1), 51–62 (1999)
  5. Google Images (March 2012), http://www.images.google.com
  6. Hays, J., Efros, A.A.: Scene completion using millions of photographs. ACM Transactions on Graphics (SIGGRAPH 2007) 26(3) (2007)
  7. Katahara, S., Aoki, M.: Face parts extraction windows based on bilateral symmetry of gradient direction. In: Solina, F., Leonardis, A. (eds.) CAIP 1999. LNCS, vol. 1689, pp. 489–497. Springer, Heidelberg (1999)
  8. Legrand, P.: Local regularity and multifractal methods for image and signal analysis. In: Scaling, Fractals and Wavelets. Wiley (January 2009)
  9. Levin, A., Zomet, A., Weiss, Y.: Learning how to inpaint from global image statistics. In: Int. Conf. on Computer Vision, vol. 1, pp. 305–312 (October 2003)
  10. Masnou, S., Morel, J.M.: Level lines based disocclusion. In: Int. Conf. on Image Processing, vol. 3, pp. 259–263 (October 1998)
  11. Parmar, C.M., Joshi, M.V., Raval, M.S., Zaveri, M.A.: Automatic image inpainting for the facial images of monuments. In: Proceedings of Electrical Engineering Centenary Conference 2011, December 14-17, pp. 415–420 (2011)
  12. Pérez, P., Gangnet, M., Blake, A.: Poisson image editing. In: ACM SIGGRAPH 2003 Papers, SIGGRAPH 2003, pp. 313–318 (2003)
  13. Tibshirani, R., Walther, G., Hastie, T.: Estimating the number of clusters in a data set via the gap statistic. Journal of Royal Stat. Soc., B 63(2), 411–423 (2001)
  14. Varma, M., Zisserman, A.: A statistical approach to material classification using image patch exemplars. IEEE Trans. PAMI 31(11), 2032–2047 (2009)
  15. Varma, M., Zisserman, A.: Classifying images of materials: Achieving viewpoint and illumination independence. In: Heyden, A., Sparr, G., Nielsen, M., Johansen, P. (eds.) ECCV 2002, Part III. LNCS, vol. 2352, pp. 255–271. Springer, Heidelberg (2002)
  16. Štruc, V., Pavešić, N.: Illumination invariant face recognition by non-local smoothing. In: Proceedings of the BioID MultiComm., pp. 1–8 (2009)
  17. Weickert, J.: Theoretical foundations of anisotropic diffusion in image processing. Computing, Suppl. 11, 221–236 (1996)
  18. Whyte, O., Sivic, J., Zisserman, A.: Get out of my picture! Internet-based inpainting. In: Proceedings of the 20th British Machine Vision Conference, London (2009)

Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Milind G. Padalkar (1)
  • Manali V. Vora (1)
  • Manjunath V. Joshi (1)
  • Mukesh A. Zaveri (2)
  • Mehul S. Raval (1)

  1. Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India
  2. Sardar Vallabhbhai National Institute of Technology, Surat, India
