Multi-level Motion-Informed Approach for Video Generation with Key Frames

  • Zackary P. T. Sin
  • Peter H. F. Ng
  • Simon C. K. Shiu
  • Fu-lai Chung
  • Hong Va Leong
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11542)

Abstract

Observing that a motion signal can be decomposed into multiple levels, a video generation model that realizes this hypothesis is proposed. The model decomposes motion into a two-level signal comprising a global path and a local pattern, modeled respectively by a latent path in the form of a composite Bézier spline and by a latent sine function. In the application context, the model fills a research gap with its ability to connect an arbitrary number of input key frames smoothly. Experimental results indicate that the model improves the smoothness of the generated video. In addition, the model's ability to separate the global and local signals has been validated.
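
To make the two-level decomposition concrete, here is a minimal sketch (an illustrative assumption, not the paper's actual implementation) of how a latent trajectory could combine a global composite Bézier spline through key-frame latent codes with a local sine pattern. The Catmull-Rom-style tangent estimates, the latent dimensionality, and all names are hypothetical.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate one cubic Bezier segment at local parameter t in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def composite_bezier(keys, t):
    """Global path: a composite cubic Bezier spline through the key-frame
    latent codes `keys`, evaluated at global parameter t in [0, 1].
    Tangents are Catmull-Rom-style estimates (an assumption)."""
    n_seg = len(keys) - 1
    s = min(int(t * n_seg), n_seg - 1)  # segment index
    lt = t * n_seg - s                  # local parameter in [0, 1]
    p0, p3 = keys[s], keys[s + 1]
    prev_p = keys[max(s - 1, 0)]
    next_p = keys[min(s + 2, n_seg)]
    p1 = p0 + (p3 - prev_p) / 6.0
    p2 = p3 - (next_p - p0) / 6.0
    return cubic_bezier(p0, p1, p2, p3, lt)

def latent_motion(keys, t, amp, freq, phase):
    """Two-level latent motion: global Bezier path plus local sine pattern."""
    return composite_bezier(keys, t) + amp * np.sin(2.0 * np.pi * freq * t + phase)

# Usage: four key frames encoded to 8-d latent codes (random stand-ins for an
# encoder's output); frames would be decoded from samples along t in [0, 1].
rng = np.random.default_rng(0)
keys = rng.normal(size=(4, 8))
amp = 0.1 * rng.normal(size=8)
phase = rng.uniform(0.0, 2.0 * np.pi, size=8)
trajectory = [latent_motion(keys, t, amp, 2.0, phase) for t in np.linspace(0, 1, 16)]
```

Because the spline passes through every key-frame code, such a trajectory interpolates an arbitrary number of key frames smoothly, while the sine term contributes the repeating local motion on top of the global path.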

Keywords

Global motion path · Local motion pattern · Video generation with key frames · Latent path · Periodic latent function

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Zackary P. T. Sin¹ (corresponding author)
  • Peter H. F. Ng¹
  • Simon C. K. Shiu¹
  • Fu-lai Chung¹
  • Hong Va Leong¹

  1. The Hong Kong Polytechnic University, Hung Hom, Hong Kong
