Abstract
We define and discuss a novel sparse coding algorithm based on closed-form EM updates and continuous latent variables. The underlying generative model combines a standard ‘spike-and-slab’ prior with a Gaussian noise model. Closed-form solutions for the E- and M-step equations are derived by generalizing probabilistic PCA. The resulting EM algorithm can take all modes of a potentially multimodal posterior into account. The computational cost of the algorithm scales exponentially with the number of hidden dimensions, since the exact E-step sums over all combinations of active latent units. With current computational resources it is nevertheless possible to efficiently learn model parameters for medium-scale problems, so the algorithm can be applied to the typical range of source separation tasks. In numerical experiments on artificial data we verify likelihood maximization and show that the derived algorithm recovers the sparse directions of standard sparse coding distributions. On source separation benchmarks comprising realistic data, we show that the algorithm is competitive with other recent methods.
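The spike-and-slab generative model summarized in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation; the parameter names (`pi`, `sigma_slab`, `sigma_noise`) are assumptions for the sketch. Each latent unit is the product of a Bernoulli "spike" and a Gaussian "slab", and observations are a linear mixture of the latents plus isotropic Gaussian noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spike_and_slab(W, pi, sigma_slab, sigma_noise, n_samples):
    """Draw observations from a spike-and-slab sparse coding model.

    Parameter names are illustrative, not the paper's notation.
    W           : (D, H) mixing matrix (dictionary of sparse directions)
    pi          : probability that a latent unit is active (the 'spike')
    sigma_slab  : std. dev. of the Gaussian 'slab' on active units
    sigma_noise : std. dev. of the isotropic observation noise
    Returns (X, Y): observations of shape (n_samples, D) and the
    sparse continuous latents of shape (n_samples, H).
    """
    D, H = W.shape
    spikes = rng.random((n_samples, H)) < pi              # Bernoulli(pi) activations
    slabs = rng.normal(0.0, sigma_slab, (n_samples, H))   # continuous slab values
    Y = spikes * slabs                                    # sparse latent codes
    noise = rng.normal(0.0, sigma_noise, (n_samples, D))
    X = Y @ W.T + noise                                   # linear mixture + noise
    return X, Y
```

The exponential cost of exact EM mentioned above arises because the posterior over the `H` binary spikes has `2**H` configurations, each of which the closed-form E-step must account for.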
Copyright information
© 2012 Springer-Verlag Berlin Heidelberg
Cite this paper
Lücke, J., Sheikh, AS. (2012). Closed-Form EM for Sparse Coding and Its Application to Source Separation. In: Theis, F., Cichocki, A., Yeredor, A., Zibulevsky, M. (eds) Latent Variable Analysis and Signal Separation. LVA/ICA 2012. Lecture Notes in Computer Science, vol 7191. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28551-6_27
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-28550-9
Online ISBN: 978-3-642-28551-6