LAPRAN: A Scalable Laplacian Pyramid Reconstructive Adversarial Network for Flexible Compressive Sensing Reconstruction

  • Kai Xu
  • Zhikang Zhang
  • Fengbo Ren
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11214)

Abstract

This paper addresses the single-image compressive sensing (CS) and reconstruction problem. We propose a scalable Laplacian pyramid reconstructive adversarial network (LAPRAN) that enables high-fidelity, flexible, and fast CS image reconstruction. LAPRAN progressively reconstructs an image following the concept of the Laplacian pyramid through multiple stages of reconstructive adversarial networks (RANs). At each pyramid level, CS measurements are fused with a contextual latent vector to generate a high-frequency image residual. Consequently, LAPRAN can produce a hierarchy of reconstructed images, each with an incrementally higher resolution and improved quality. The scalable pyramid structure of LAPRAN enables high-fidelity CS reconstruction with a flexible resolution that is adaptive to a wide range of compression ratios (CRs), which is infeasible with existing methods. Experimental results on multiple public datasets show that LAPRAN offers an average 7.47 dB and 5.98 dB PSNR improvement, and an average 57.93% and 33.20% SSIM improvement, compared to model-based and data-driven baselines, respectively.
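The data flow described above (coarse-to-fine reconstruction, where each stage doubles the resolution and adds a predicted high-frequency residual) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the learned RAN stage is replaced by a placeholder `residual_stage` that returns zeros, the 2x upsampling is nearest-neighbor rather than a learned deconvolution, and all names and array sizes are hypothetical.

```python
import numpy as np

def upsample2x(img):
    # Nearest-neighbor 2x upsampling (stand-in for a learned upsampling layer).
    return img.repeat(2, axis=0).repeat(2, axis=1)

def residual_stage(measurements, upsampled):
    # Placeholder for a learned RAN stage that would fuse the CS measurements
    # with a contextual latent vector. Returning zeros keeps the sketch focused
    # on data flow, not reconstruction quality.
    return np.zeros_like(upsampled)

def pyramid_reconstruct(measurements, base, num_stages):
    """Progressively reconstruct an image: each stage doubles the resolution
    and adds a high-frequency residual, yielding a hierarchy of outputs."""
    outputs = [base]
    x = base
    for _ in range(num_stages):
        up = upsample2x(x)
        x = up + residual_stage(measurements, up)
        outputs.append(x)
    return outputs

y = np.random.rand(32)        # hypothetical CS measurement vector
base = np.random.rand(8, 8)   # hypothetical coarse initial reconstruction
pyramid = pyramid_reconstruct(y, base, num_stages=3)
# Each output in the hierarchy has twice the resolution of the previous one:
# (8, 8) -> (16, 16) -> (32, 32) -> (64, 64)
print([o.shape for o in pyramid])
```

Because every stage emits a usable image, reconstruction can stop at any pyramid level, which is what gives the architecture its flexible output resolution across compression ratios.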

Keywords

Compressive sensing · Reconstruction · Laplacian pyramid · Reconstructive adversarial network · Feature fusion

Acknowledgement

This work is supported by NSF grant IIS/CPS-1652038 and a Google Faculty Research Award.

Supplementary material

Supplementary material 1: 474197_1_En_30_MOESM1_ESM.pdf (5.8 MB)

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Arizona State University, Tempe, USA