
Binary Sparse Coding

  • Conference paper
Latent Variable Analysis and Signal Separation (LVA/ICA 2010)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6365)

Abstract

We study a sparse coding learning algorithm that learns the data sparseness and the basis functions simultaneously. The algorithm is derived from a generative model with binary latent variables instead of the continuous-valued latents used in classical sparse coding. We apply a novel approach to maximum likelihood parameter estimation that allows all model parameters to be estimated efficiently. The approach is a new form of variational EM that uses truncated sums instead of factored approximations to the intractable posterior distributions. In contrast to almost all previous versions of sparse coding, the resulting learning algorithm estimates the optimal degree of sparseness along with the optimal basis functions, so the time course of the data sparseness can be monitored while the basis functions are learned. In numerical experiments on artificial data we show that the algorithm reliably recovers the true underlying basis functions together with the noise level and the data sparseness. In applications to natural images we obtain Gabor-like basis functions along with a sparseness estimate. If large numbers of latent variables are used, the obtained basis functions take on properties of simple-cell receptive fields that classical sparse coding and ICA approaches do not reproduce.
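The abstract describes the model only in words. As a rough illustration, the following Python sketch shows one way such a binary sparse coding generative model and a truncated posterior sum could look, assuming a Bernoulli prior with activation probability pi_act, a linear mixing matrix W, and isotropic Gaussian noise. All names, sizes, and the "at most gamma active latents" candidate rule are assumptions made here for illustration, not a reproduction of the authors' full learning algorithm.

import numpy as np
from itertools import combinations

# Hypothetical sketch: H binary latents s_h ~ Bernoulli(pi_act),
# observations y = W s + Gaussian noise with standard deviation sigma.
rng = np.random.default_rng(0)

H, D = 8, 16              # number of latent variables / observed dimensions
pi_act, sigma = 0.2, 0.1  # sparseness (prior activation prob.) and noise level
W = rng.normal(size=(D, H))   # basis functions stored as columns of W

def sample(n):
    """Draw n data points from the generative model."""
    s = (rng.random((n, H)) < pi_act).astype(float)   # binary latent vectors
    y = s @ W.T + sigma * rng.normal(size=(n, D))     # linear mixing plus noise
    return s, y

def log_joint(y, s):
    """log p(y, s) for one data point y and one binary state s."""
    log_prior = np.sum(s * np.log(pi_act) + (1 - s) * np.log(1 - pi_act))
    resid = y - W @ s
    log_lik = -0.5 * (resid @ resid) / sigma**2 - 0.5 * D * np.log(2 * np.pi * sigma**2)
    return log_prior + log_lik

def truncated_posterior(y, gamma=3):
    """Posterior over a truncated set: all states with at most gamma active latents."""
    states, log_ps = [], []
    for k in range(gamma + 1):
        for idx in combinations(range(H), k):
            s = np.zeros(H)
            s[list(idx)] = 1.0
            states.append(s)
            log_ps.append(log_joint(y, s))
    log_ps = np.array(log_ps)
    p = np.exp(log_ps - log_ps.max())
    p /= p.sum()                      # normalised over the truncated set only
    return np.array(states), p

s_true, y = sample(1)
states, post = truncated_posterior(y[0])
print("approx. posterior mean of s:", post @ states)

In this sketch the E-step expectation is replaced by a sum over a small candidate set of binary states rather than over all 2**H configurations, which is the general idea behind the truncated variational EM mentioned in the abstract; the paper's actual candidate selection and M-step updates are not reproduced here.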

Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Henniges, M., Puertas, G., Bornschein, J., Eggert, J., Lücke, J. (2010). Binary Sparse Coding. In: Vigneron, V., Zarzoso, V., Moreau, E., Gribonval, R., Vincent, E. (eds) Latent Variable Analysis and Signal Separation. LVA/ICA 2010. Lecture Notes in Computer Science, vol 6365. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15995-4_56

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-15995-4_56

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15994-7

  • Online ISBN: 978-3-642-15995-4

  • eBook Packages: Computer Science, Computer Science (R0)
