A Temporally-Aware Interpolation Network for Video Frame Inpainting

  • Ximeng Sun
  • Ryan Szeto
  • Jason J. Corso
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11363)

Abstract

We propose the first deep learning solution to video frame inpainting, a more challenging but less ambiguous task than related problems such as general video inpainting, frame interpolation, and video prediction. We devise a pipeline composed of two modules: a bidirectional video prediction module and a temporally-aware frame interpolation module. The prediction module makes two intermediate predictions of the missing frames, each conditioned on the preceding and following frames respectively, using a shared convolutional LSTM-based encoder-decoder. The interpolation module blends the intermediate predictions, using time information and hidden activations from the video prediction module to resolve disagreements between the predictions. Our experiments demonstrate that our approach produces more accurate and qualitatively satisfying results than a state-of-the-art video prediction method and many strong frame inpainting baselines. Our code is available at https://github.com/sunxm2357/TAI_video_frame_inpainting.
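
To make the data flow of the two-module pipeline concrete, the following minimal PyTorch sketch mirrors the description above. It is illustrative only: BidirectionalPredictor stands in for the shared convolutional LSTM encoder-decoder with a single convolutional layer per step, and the blending rule in TemporallyAwareInterpolation is an assumed simplification of the interpolation network. All names, layer sizes, and the mask formulation are hypothetical, not the authors' implementation (see the repository above for that).

import torch
import torch.nn as nn

class BidirectionalPredictor(nn.Module):
    # Stand-in for the shared convolutional LSTM encoder-decoder: one conv
    # encoder/decoder pair rolled forward a frame at a time. The recurrence
    # and layer sizes are illustrative assumptions only.
    def __init__(self, channels=3, hidden=16):
        super().__init__()
        self.encode = nn.Conv2d(channels, hidden, 3, padding=1)
        self.decode = nn.Conv2d(hidden, channels, 3, padding=1)

    def forward(self, context, num_missing):
        # context: (B, T, C, H, W). Roll the last context frame forward,
        # collecting per-step predictions and hidden activations.
        preds, hiddens = [], []
        frame = context[:, -1]
        for _ in range(num_missing):
            h = torch.relu(self.encode(frame))
            frame = torch.tanh(self.decode(h))
            preds.append(frame)
            hiddens.append(h)
        return torch.stack(preds, 1), torch.stack(hiddens, 1)

class TemporallyAwareInterpolation(nn.Module):
    # Blends the forward and backward predictions of one missing frame.
    # A scalar time weight is adjusted by a spatial mask computed from the
    # predictor's hidden activations, so regions where the two predictions
    # disagree can favour whichever side is more reliable.
    def __init__(self, hidden=16):
        super().__init__()
        self.mask_net = nn.Conv2d(2 * hidden, 1, 3, padding=1)

    def forward(self, fwd, bwd, h_fwd, h_bwd, t):
        # t in (0, 1): relative temporal position of the missing frame.
        mask = torch.sigmoid(self.mask_net(torch.cat([h_fwd, h_bwd], 1)))
        w = torch.clamp((1.0 - t) + (mask - 0.5), 0.0, 1.0)
        return w * fwd + (1.0 - w) * bwd

# Usage on dummy data: 4 preceding and 4 following frames, 3 missing.
B, C, H, W, T_miss = 2, 3, 32, 32, 3
preceding = torch.randn(B, 4, C, H, W)
following = torch.randn(B, 4, C, H, W)

predictor = BidirectionalPredictor(C)
blender = TemporallyAwareInterpolation()

fwd, h_fwd = predictor(preceding, T_miss)
# The same (shared) predictor runs on the time-reversed following frames;
# flipping its outputs puts both predictions in forward temporal order.
bwd, h_bwd = predictor(following.flip(1), T_miss)
bwd, h_bwd = bwd.flip(1), h_bwd.flip(1)

filled = [blender(fwd[:, i], bwd[:, i], h_fwd[:, i], h_bwd[:, i],
                  t=(i + 1) / (T_miss + 1))
          for i in range(T_miss)]

With the mask fixed at 0.5, the blend reduces to plain time-weighted averaging of the two predictions; the learned, spatially varying mask is what lets the module resolve per-pixel disagreements between the forward and backward predictions, which is the role the abstract assigns to the temporally-aware interpolation module.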

Keywords

Video inpainting · Video prediction · Frame interpolation

Acknowledgements

This work is partly supported by ARO W911NF-15-1-0354, DARPA FA8750-17-2-0112, and DARPA FA8750-16-C-0168. It reflects the opinions and conclusions of its authors, not those of the funding agencies.

Supplementary material

Supplementary material 1 (mp4 46259 KB)

Supplementary material 2 (pdf 7379 KB)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. University of Michigan, Ann Arbor, USA
