
Video Restoration Using Convolutional Neural Networks for Low-Level FPGAs

  • Conference paper
Knowledge Science, Engineering and Management (KSEM 2018)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 11062)

Abstract

Deep convolutional neural networks (CNNs) have attracted wide attention for video restoration in the last few years. Due to the enormous computational complexity of deep CNNs, implementations on high-level FPGAs have been proposed to achieve power-efficient solutions. However, low-end devices, such as mobile devices and low-level FPGAs, have very limited processing capabilities, including limited logic gates and memory bandwidth. In this paper, we propose a power-efficient design of CNNs for implementation on low-level FPGAs for near real-time video frame restoration. Specifically, our video restoration method reduces the number of model parameters by analyzing the network hyper-parameters. Fixed-point quantization is adopted during the training process to improve the processing frame rate while retaining PSNR quality. Hence, the computational requirement of the proposed CNNs is reduced enough for implementation using the OpenCL framework on a low-level FPGA with only 85K logic gates. Experimental results show that the proposed FPGA platform consumes less than one-eighth of the power of the CPU and GPU implementations.
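The full paper specifies the exact network architecture and quantization scheme; as a rough illustration of the two ingredients named in the abstract, the sketch below (hypothetical helper names, not the authors' code) shows symmetric signed fixed-point quantization of weights and the PSNR metric used to judge restoration quality:

```python
import numpy as np

def fixed_point_quantize(x, word_bits=8, frac_bits=6):
    """Quantize values to signed fixed-point (word_bits total, frac_bits fractional).

    Values are rounded to the nearest representable step 2**-frac_bits and
    saturated to the signed range, then returned in dequantized (float) form,
    which is what the network "sees" during quantization-aware training.
    """
    scale = 2 ** frac_bits
    qmin = -(2 ** (word_bits - 1))
    qmax = 2 ** (word_bits - 1) - 1
    q = np.clip(np.round(np.asarray(x, dtype=float) * scale), qmin, qmax)
    return q / scale

def psnr(ref, test, peak=1.0):
    """Peak signal-to-noise ratio in dB between two images with values in [0, peak]."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

For example, with 8-bit words and 6 fractional bits the representable step is 1/64, so a weight of -0.3 quantizes to -19/64 = -0.296875 and anything above 127/64 saturates; PSNR then measures how much restoration quality such rounding costs.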



Acknowledgments

This work was supported in part by the National Natural Science Foundation of China (Nos. 61602312 and 61620106008) and the Shenzhen Emerging Industries of the Strategic Basic Research Project (No. JCYJ20160226191842793).

Author information


Corresponding author

Correspondence to Kwok-Wai Hung.


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Hung, K.-W., Qiu, C., Jiang, J. (2018). Video Restoration Using Convolutional Neural Networks for Low-Level FPGAs. In: Liu, W., Giunchiglia, F., Yang, B. (eds.) Knowledge Science, Engineering and Management. KSEM 2018. Lecture Notes in Computer Science, vol. 11062. Springer, Cham. https://doi.org/10.1007/978-3-319-99247-1_22


  • DOI: https://doi.org/10.1007/978-3-319-99247-1_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99246-4

  • Online ISBN: 978-3-319-99247-1

  • eBook Packages: Computer Science, Computer Science (R0)
