Joint Optimization for Compressive Video Sensing and Reconstruction Under Hardware Constraints

  • Michitaka Yoshida
  • Akihiko Torii
  • Masatoshi Okutomi
  • Kenta Endo
  • Yukinobu Sugiyama
  • Rin-ichiro Taniguchi
  • Hajime Nagahara
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11214)

Abstract

Compressive video sensing encodes multiple sub-frames into a single frame with controlled per-pixel sensor exposures and reconstructs the sub-frames from that single compressed frame. It is known that spatially and temporally random exposures provide the most balanced compression in terms of signal recovery. However, a sensor that realizes a fully random exposure for each pixel is difficult to build in practice, because the required circuitry complicates the sensor and compromises its sensitivity and resolution. It is therefore necessary to design the exposure pattern under the constraints imposed by the hardware. In this paper, we propose a method that jointly optimizes the compressive-sensing exposure pattern and the reconstruction framework under such hardware constraints. Through simulations and real-camera experiments, we demonstrate that the proposed framework reconstructs multiple sub-frame images with higher quality than non-optimized exposure patterns and reconstruction methods.
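
As a concrete illustration of the encoding step described above, the following is a minimal NumPy sketch of the coded-exposure measurement model: T sub-frames are modulated by per-pixel binary exposure masks and summed into a single compressed frame. The array shapes, the fully random masks, and the toy setup are illustrative assumptions and do not reproduce the authors' sensor or reconstruction network.

    import numpy as np

    # Toy sizes: T sub-frames of an H x W video block (illustrative values only).
    T, H, W = 16, 32, 32
    rng = np.random.default_rng(0)

    video = rng.random((T, H, W))            # ground-truth sub-frames x_t
    masks = rng.integers(0, 2, (T, H, W))    # spatio-temporally random binary exposures M_t
                                             # (an assumption; real sensors constrain this pattern)

    # Forward model: encode the T sub-frames into one compressed frame,
    # y = sum_t M_t * x_t (element-wise product, summed over time).
    frame = (masks * video).sum(axis=0)      # shape (H, W)

    # Per pixel, frame[i, j] = masks[:, i, j] @ video[:, i, j]: a single equation in
    # T unknowns, so recovery is under-determined and requires a spatio-temporal
    # prior (e.g., a sparse dictionary or a learned decoder network).
    print(frame.shape)                       # (32, 32)

In the setting considered in the paper, the masks cannot be chosen freely per pixel; the joint optimization searches for exposure patterns that satisfy the sensor's hardware constraints while remaining favorable for reconstruction.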

Keywords

Compressive sensing · Video reconstruction · Deep neural network

Acknowledgement

This work was supported by JSPS KAKENHI (Grant Number 18K19818).


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Kyushu University, Fukuoka, Japan
  2. Tokyo Institute of Technology, Tokyo, Japan
  3. Hamamatsu Photonics K.K., Hamamatsu, Japan
  4. Osaka University, Suita, Japan