Generative Image Inpainting for Person Pose Generation
Filling large missing regions of an image with semantically meaningful and visually coherent content is a challenging task. Traditional approaches copy textures from surrounding pixels into the missing patches and are therefore suited to filling holes with generic background. The dataset provided in the ECCV'18 Satellite Workshop ChaLearn LAP Inpainting Competition, Track 1 (Inpainting of still images of humans), contains images with randomly placed, human-centered square masks occluding up to 70% of the image data. To inpaint these human-centric images, we use a generative model that can synthesize patches that appear nowhere else in the image, which allows it to complete detailed structures such as faces. Our model can inpaint images with multiple holes of different sizes at various locations and handles a wide variety of scenes, producing decent reconstructions not only of the occluded human parts but also of the background. Most of our work is inspired by previous work on image generation and image inpainting [1-4].
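The dataset's random square occlusions can be sketched as follows. This is a minimal illustration, not the competition toolkit: the function `random_square_mask` and its parameters are our own naming, and it places the hole uniformly at random, whereas centering the mask on the person would additionally require person location annotations.

```python
import numpy as np

def random_square_mask(height, width, max_occlusion=0.7, rng=None):
    """Binary mask with one randomly placed square hole.

    The hole covers at most `max_occlusion` of the image area,
    mirroring the dataset's up-to-70% occlusion setting.
    1 marks pixels to keep; 0 marks the hole to inpaint.
    """
    rng = np.random.default_rng(rng)
    # Largest square side whose area stays within the occlusion budget.
    max_side = int(np.sqrt(max_occlusion * height * width))
    side = int(rng.integers(1, max_side + 1))
    top = int(rng.integers(0, height - side + 1))
    left = int(rng.integers(0, width - side + 1))
    mask = np.ones((height, width), dtype=np.uint8)
    mask[top:top + side, left:left + side] = 0
    return mask

# A masked model input is the element-wise product of image and mask.
```

The mask is kept as a separate channel rather than baked into the image, so the inpainting network can distinguish true black pixels from occluded ones.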
1. Jiahui Yu, Zhe Lin, Jimei Yang, Xiaohui Shen, Xin Lu, and Thomas S. Huang. Generative image inpainting with contextual attention. arXiv preprint, 2018.
2. Satoshi Iizuka, Edgar Simo-Serra, and Hiroshi Ishikawa. Globally and locally consistent image completion. ACM Transactions on Graphics (TOG), 36(4):107, 2017.
3. Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems, pages 2672-2680, 2014.
4. Justin Johnson, Alexandre Alahi, and Li Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pages 694-711. Springer, 2016.