Dictionary Learning Based on Laplacian Score in Sparse Coding

  • Conference paper
Machine Learning and Data Mining in Pattern Recognition (MLDM 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6871)

Abstract

Sparse coding, which produces a vector representation based on a sparse linear combination of dictionary atoms, has been widely applied in signal processing, data mining and neuroscience. Constructing a proper dictionary for sparse coding is a common and challenging problem. In this paper, we treat dictionary learning as an unsupervised learning process and propose a Laplacian score dictionary (LSD). This new learning method uses local geometry information to select atoms for the dictionary. Comparisons with alternative clustering-based dictionary learning methods are conducted. We also compare LSD with the full-training-data dictionary and other classic methods in the experiments. The classification performance on binary-class and multi-class datasets from the UCI repository demonstrates the effectiveness and efficiency of our method.
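
The method described above combines two well-defined ingredients: the Laplacian score, a graph-based criterion (He, Cai and Niyogi, 2005) that measures how well a quantity respects the local geometry of the data, and sparse coding, i.e. representing a signal as a sparse linear combination of dictionary atoms via l1-regularized least squares. The snippet below is a minimal NumPy sketch of that pipeline, not the authors' implementation: the Laplacian score itself follows the standard definition, while the rule for applying it to candidate atoms (scoring training samples through the transposed data and keeping the lowest-scoring ones), the ISTA solver, and all parameter values are illustrative assumptions.

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Laplacian score of each column of X (He, Cai & Niyogi, 2005).

    X is an (n, d) array whose rows are samples and whose columns are the
    items being scored.  A small score means the column varies smoothly
    over the k-NN graph of the rows, i.e. it respects local geometry.
    """
    n = X.shape[0]
    # Pairwise squared distances between rows.
    sq = np.sum(X ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
    np.fill_diagonal(d2, np.inf)                       # no self-edges
    # Symmetric k-NN adjacency with heat-kernel weights.
    knn = np.argsort(d2, axis=1)[:, :k]
    S = np.zeros((n, n))
    rows = np.repeat(np.arange(n), k)
    S[rows, knn.ravel()] = np.exp(-d2[rows, knn.ravel()] / t)
    S = np.maximum(S, S.T)
    deg = S.sum(axis=1)                                # degree vector
    L = np.diag(deg) - S                               # graph Laplacian
    # Centre each column about its degree-weighted mean, then score it.
    F = X - (X.T @ deg / deg.sum())[None, :]
    num = np.einsum('ir,ij,jr->r', F, L, F)
    den = np.einsum('ir,i,ir->r', F, deg, F) + 1e-12
    return num / den

def ista(D, x, lam=0.1, n_iter=200):
    """Sparse-code x over dictionary D (columns are atoms) by solving
    min_a 0.5*||x - D a||^2 + lam*||a||_1 with iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2             # 1 / Lipschitz constant
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = a - step * (D.T @ (D @ a - x))             # gradient step
        a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return a

# Toy usage with synthetic data (sizes and parameters are arbitrary).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 30))               # 200 samples, 30 dims

# Assumed selection rule: score each training sample as a candidate atom by
# applying the Laplacian score to the transposed data, then keep the m
# lowest-scoring samples as dictionary atoms.
m = 40
scores = laplacian_score(X_train.T, k=5, t=1.0)        # one score per sample
atoms = X_train[np.argsort(scores)[:m]]                # (m, 30)
D = atoms.T / np.linalg.norm(atoms, axis=1)            # unit-norm atom columns

x_new = rng.standard_normal(30)
code = ista(D, x_new, lam=0.1)
print("non-zero coefficients:", np.count_nonzero(np.abs(code) > 1e-8))
```

The paper evaluates the selected dictionary through classification on UCI datasets; that evaluation step is not reproduced in this sketch.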




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Xu, J., Man, H. (2011). Dictionary Learning Based on Laplacian Score in Sparse Coding. In: Perner, P. (eds) Machine Learning and Data Mining in Pattern Recognition. MLDM 2011. Lecture Notes in Computer Science (LNAI), vol 6871. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23199-5_19

  • DOI: https://doi.org/10.1007/978-3-642-23199-5_19

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23198-8

  • Online ISBN: 978-3-642-23199-5

  • eBook Packages: Computer Science, Computer Science (R0)
