ChaLearn Looking at People: Inpainting and Denoising Challenges

  • Sergio Escalera
  • Martí Soler
  • Stephane Ayache
  • Umut Güçlü
  • Jun Wan
  • Meysam Madadi
  • Xavier Baró
  • Hugo Jair Escalante
  • Isabelle Guyon
Conference paper
Part of the book series: The Springer Series on Challenges in Machine Learning (SSCML)

Abstract

Dealing with incomplete information is a well-studied problem in machine learning and computational intelligence. In computer vision, however, it has been addressed only in specific scenarios (e.g., certain types of occlusion in particular kinds of images), even though incomplete information is common in visual data. This chapter describes the design of an academic competition on inpainting of images and video sequences that was part of the competition program of WCCI 2018 and had a satellite event co-located with ECCV 2018. The ChaLearn Looking at People Inpainting Challenge aimed to advance the state of the art in visual inpainting by promoting the development of methods for recovering missing and occluded information from images and video. Three tracks were proposed in which visual inpainting could be helpful but remains challenging: human body pose estimation, text overlay removal, and fingerprint denoising. The chapter describes the design of the challenge, including the release of three novel datasets and the description of the evaluation metrics, baselines, and evaluation protocol. The results of the challenge are analyzed and discussed in detail, and conclusions drawn from the event are outlined.
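The abstract refers to quantitative evaluation metrics for the inpainting tracks without detailing them here. As an illustrative sketch only (not the challenge's official protocol), the snippet below scores an inpainted image against its ground truth with PSNR and SSIM, two standard full-reference image-quality metrics; the file names and the use of scikit-image are assumptions made for the example.

    # Illustrative sketch only: scoring an inpainted result against ground truth
    # with PSNR and SSIM. File names and the scikit-image dependency are
    # assumptions, not the challenge's official evaluation protocol.
    import numpy as np
    from skimage import io
    from skimage.metrics import peak_signal_noise_ratio, structural_similarity

    def score_pair(gt_path: str, pred_path: str):
        """Return (PSNR, SSIM) between a ground-truth image and an inpainted image."""
        gt = io.imread(gt_path).astype(np.float64)
        pred = io.imread(pred_path).astype(np.float64)
        assert gt.shape == pred.shape, "images must have identical shapes"
        data_range = 255.0  # assuming 8-bit inputs
        psnr = peak_signal_noise_ratio(gt, pred, data_range=data_range)
        ssim = structural_similarity(
            gt, pred, data_range=data_range,
            channel_axis=-1 if gt.ndim == 3 else None,  # color vs. grayscale
        )
        return psnr, ssim

    if __name__ == "__main__":
        # Hypothetical file names, for illustration only.
        psnr, ssim = score_pair("ground_truth.png", "inpainted.png")
        print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")

Higher values of both metrics indicate a reconstruction closer to the ground truth; in practice such scores would be averaged over all test images of a track.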

Acknowledgements

The sponsors of ChaLearn Looking at People inpainting and denoising events are Google, ChaLearn, Amazon, and Disney Research. This work has been partially supported by the Spanish project TIN2016-74946-P (MINECO/FEDER, UE) and CERCA Programme/Generalitat de Catalunya. This work was also partially funded by the French national research agency (grant number ANR16-CE23-0006). We gratefully acknowledge the support of NVIDIA Corporation with the donation of the GPU used for this research. This work is partially supported by ICREA under the ICREA Academia programme. We thank all challenge participants for their excellent contributions.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Sergio Escalera 1 (corresponding author)
  • Martí Soler 2
  • Stephane Ayache 3
  • Umut Güçlü 4
  • Jun Wan 5
  • Meysam Madadi 6
  • Xavier Baró 7
  • Hugo Jair Escalante 8
  • Isabelle Guyon 9, 10

  1. Department of Mathematics & Informatics, Universitat de Barcelona, and Computer Vision Center, Barcelona, Spain
  2. Universitat de Barcelona, Barcelona, Spain
  3. Aix-Marseille University, Marseille, France
  4. Radboud University Nijmegen, Nijmegen, The Netherlands
  5. Institute of Automation, Chinese Academy of Sciences, Beijing, China
  6. Computer Vision Center, Autonomous University of Barcelona, Bellaterra, Barcelona, Spain
  7. Open University of Catalonia, Barcelona, Spain
  8. INAOE, Puebla, Mexico
  9. Université Paris-Saclay, Paris, France
  10. ChaLearn, Berkeley, USA