
From High Definition Image to Low Space Optimization

  • Conference paper
Scale Space and Variational Methods in Computer Vision (SSVM 2011)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 6667)

Abstract

Signal and image processing have seen in the last few years an explosion of interest in a new form of signal/image characterization via the concept of sparsity with respect to a dictionary. An active field of research is dictionary learning: given a large set of example signals/images, one would like to learn a dictionary with far fewer atoms than examples on one hand, and many more atoms than pixels on the other. The dictionary is constructed such that the examples are sparse over it, i.e., each image is a linear combination of a small number of atoms.
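As a sketch of this formulation (using standard notation that is an assumption here, not the paper's own), let the N example patches be the columns of a matrix Y with n pixels per patch, and let D be the dictionary with n < K < N atoms. Dictionary learning then seeks

```latex
\min_{D,\,X}\ \lVert Y - DX \rVert_F^2
\quad\text{subject to}\quad \lVert x_i \rVert_0 \le T \ \text{for every column } x_i \text{ of } X,
```

so that each example is approximated by a linear combination of at most T atoms.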

This paper suggests a new computational approach to the problem of dictionary learning. We show that smart non-uniform sampling, via the recently introduced method of coresets, achieves excellent results, with controlled deviation from the optimal dictionary. We represent dictionary learning for sparse representation of images as a geometric problem, and illustrate the coreset technique by using it together with the K–SVD method. Our simulations demonstrate gain factor of up to 60 in computational time with the same, and even better, performance. We also demonstrate our ability to perform computations on larger patches and high-definition images, where the traditional approach breaks down.
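The following minimal sketch illustrates how a coreset-style non-uniform sampling step could precede an off-the-shelf dictionary learner. The squared-norm importance score and the use of scikit-learn's MiniBatchDictionaryLearning (standing in here for the K-SVD implementation the paper actually uses) are illustrative assumptions, not the authors' code.

```python
# Sketch: coreset-style non-uniform sampling of image patches before
# dictionary learning.  Illustration only -- the energy-based "importance"
# proxy and the scikit-learn learner in place of K-SVD are assumptions,
# not the method of the paper.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

rng = np.random.default_rng(0)

# Toy data: N example patches of n pixels each (one patch per row).
N, n = 50_000, 64                      # e.g. 8x8 patches
patches = rng.standard_normal((N, n))

# 1. Assign each patch an importance score.  A crude proxy: the energy of
#    the mean-removed patch (high-contrast patches are harder to represent).
centered = patches - patches.mean(axis=1, keepdims=True)
scores = (centered ** 2).sum(axis=1) + 1e-12
probs = scores / scores.sum()

# 2. Draw a small coreset by sampling in proportion to the scores, and
#    attach the usual inverse-probability weights.
m = 2_000                              # coreset size << N
idx = rng.choice(N, size=m, replace=True, p=probs)
weights = 1.0 / (m * probs[idx])

# 3. Fold the weights into the data by scaling each sampled patch by
#    sqrt(weight), so the Frobenius reconstruction error of the coreset
#    approximates the error over all N patches.
coreset = patches[idx] * np.sqrt(weights)[:, None]

# 4. Learn a dictionary on the coreset only (stand-in for K-SVD).
K = 256                                # more atoms than pixels, fewer than examples
learner = MiniBatchDictionaryLearning(n_components=K,
                                      transform_algorithm="omp",
                                      transform_n_nonzero_coefs=5,
                                      random_state=0)
learner.fit(coreset)
D = learner.components_                # K x n learned dictionary
```

Because the sampled patches are rescaled by the square roots of their inverse-probability weights, the reconstruction error over the small coreset approximates the error over the full patch set, which is the property a coreset is meant to preserve.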



References

  1. Agarwal, P.K., Har-Peled, S., Varadarajan, K.R.: Approximating extent measures of points. Journal of the ACM 51(4), 606–635 (2004)


  2. Agarwal, P.K., Har-Peled, S., Varadarajan, K.R.: Geometric approximations via coresets. Combinatorial and Computational Geometry - MSRI Publications 52, 1–30 (2005)


  3. Aharon, M., Elad, M., Bruckstein, A.: K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Transactions on Signal Processing 54(11), 4311–4322 (2006)


  4. Czumaj, A., Sohler, C.: Sublinear-time approximation algorithms for clustering via random sampling. Random Struct. Algorithms (RSA) 30(1-2), 226–256 (2007)


  5. Elad, M., Aharon, M.: Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Processing 15(12), 3736–3745 (2006)


  6. Feldman, D., Fiat, A., Segev, D., Sharir, M.: Bi-criteria linear-time approximations for generalized k-mean/median/center. In: Proc. 23rd ACM Symp. on Computational Geometry (SOCG), pp. 19–26 (2007)


  7. Feldman, D., Langberg, M.: A unified framework for approximating and clustering data. Manuscript (submitted, 2010)


  8. Feldman, D., Monemizadeh, M., Sohler, C.: A PTAS for k-means clustering based on weak coresets. In: Proc. 23rd ACM Symp. on Computational Geometry (SoCG), pp. 11–18 (2007)


  9. Har-Peled, S.: Low rank matrix approximation in linear time. Manuscript (2006)


  10. Kreutz-Delgado, K., Murray, J.F., Rao, B.D., Engan, K., Lee, T.W., Sejnowski, T.J.: Dictionary learning algorithms for sparse representation. Neural Computation 15(2), 349–396 (2003)


  11. Lesage, S., Gribonval, R., Bimbot, F., Benaroya, L.: Learning unions of orthonormal bases with thresholded singular value decomposition. In: IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2005, vol. 5, IEEE, Los Alamitos (2005)


  12. Pati, Y.C., Rezaiifar, R., Krishnaprasad, P.S.: Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In: Conference Record of The Twenty-Seventh Asilomar Conference on Signals, Systems and Computers, pp. 40–44. IEEE, Los Alamitos (1993)


  13. Rubinstein, R.: KSVD-Box software package. Technical report, http://www.cs.technion.ac.il/~ronrubin/software/ksvdbox13.zip

  14. Vapnik, V.N., Chervonenkis, A.Y.: On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16(2), 264–280 (1971)





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Feigin, M., Feldman, D., Sochen, N. (2012). From High Definition Image to Low Space Optimization. In: Bruckstein, A.M., ter Haar Romeny, B.M., Bronstein, A.M., Bronstein, M.M. (eds) Scale Space and Variational Methods in Computer Vision. SSVM 2011. Lecture Notes in Computer Science, vol 6667. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24785-9_39

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-24785-9_39

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24784-2

  • Online ISBN: 978-3-642-24785-9

  • eBook Packages: Computer Science (R0)
