Abstract
Deep convolutional neural networks (CNNs) have attracted wide attention for video restoration in the last few years. Due to the enormous computational complexity of deep CNNs, implementations on high-level FPGAs have been proposed to achieve power-efficient solutions. However, low-end devices, such as mobile devices and low-level FPGAs, have very limited processing capabilities, e.g., few logic gates and low memory bandwidth. In this paper, we propose a power-efficient design of CNNs for near real-time video frame restoration on low-level FPGAs. Specifically, our video restoration method reduces the number of model parameters by analyzing the network hyper-parameters. Fixed-point quantization is adopted during the training process to improve the processing frame rate while retaining PSNR quality. The computational requirement of the proposed CNNs is thus alleviated, allowing implementation with the OpenCL framework on a low-level FPGA with only 85K logic gates. Experimental results show that the proposed FPGA platform consumes more than 8 times less power than CPU and GPU implementations.
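The abstract mentions fixed-point quantization during training but does not specify the scheme. As a minimal illustrative sketch (not the authors' implementation), the following shows how a float tensor can be rounded to a signed fixed-point format with a chosen split of integer and fractional bits; the Q2.6 default here is an assumption chosen for illustration only:

```python
import numpy as np

def quantize_fixed_point(x, int_bits=2, frac_bits=6):
    """Quantize a float array to a signed fixed-point Q(int_bits).(frac_bits) grid.

    Values are rounded to the nearest multiple of 2**-frac_bits and clipped
    to the representable range of a (1 + int_bits + frac_bits)-bit signed code.
    """
    scale = 2.0 ** frac_bits
    max_val = 2.0 ** int_bits - 2.0 ** -frac_bits  # largest positive code
    min_val = -(2.0 ** int_bits)                   # most negative code
    return np.clip(np.round(x * scale) / scale, min_val, max_val)

# Example: 0.1 is not representable with 6 fractional bits,
# so it snaps to the nearest grid point 6/64 = 0.09375;
# out-of-range values saturate at the format limits.
w = np.array([0.1, 5.0, -10.0])
q = quantize_fixed_point(w)
```

In training-time quantization schemes of this kind, such a rounding step is typically applied on the forward pass so the network learns weights that remain accurate under the reduced precision used on the FPGA.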
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China (No. 61602312, 61620106008) and the Shenzhen Emerging Industries of the Strategic Basic Research Project (No. JCYJ20160226191842793).
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Hung, KW., Qiu, C., Jiang, J. (2018). Video Restoration Using Convolutional Neural Networks for Low-Level FPGAs. In: Liu, W., Giunchiglia, F., Yang, B. (eds) Knowledge Science, Engineering and Management. KSEM 2018. Lecture Notes in Computer Science(), vol 11062. Springer, Cham. https://doi.org/10.1007/978-3-319-99247-1_22
Print ISBN: 978-3-319-99246-4
Online ISBN: 978-3-319-99247-1