
Feature Selection via Co-regularized Sparse-Group Lasso

  • Paula L. Amaral Santos
  • Sultan Imangaliyev
  • Klamer Schutte
  • Evgeni Levin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10122)

Abstract

We propose the co-regularized sparse-group lasso algorithm: a technique that allows the incorporation of auxiliary information into the learning task in terms of “groups” and “distances” among the predictors. The proposed algorithm is particularly suitable for a wide range of biological applications where good predictive performance is required and where, in addition, it is important to retrieve all relevant predictors in order to deepen the understanding of the underlying biological process. Our cost function requires related groups of predictors to provide similar contributions to the final response and thus guides the feature selection process using the auxiliary information. We evaluate the proposed method on a synthetic dataset and examine various settings in which its application is beneficial in comparison to the standard lasso, elastic net, group lasso and sparse-group lasso techniques. Finally, we make a Python implementation of our algorithm freely available for download (available at www.learning-machines.com).
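
To make the role of the cost function concrete, below is a minimal, illustrative sketch in Python (the authors provide their own implementation at www.learning-machines.com; this is not it). The exact formulation here is an assumption: a squared-error loss with lasso and group-lasso penalties plus a co-regularization term that penalizes differences between the fitted contributions of related groups, weighted by auxiliary distance information. The names coreg_sgl_objective, groups and weights are hypothetical.

    import numpy as np

    def coreg_sgl_objective(beta, X, y, groups, weights,
                            lam1=0.1, lam2=0.1, lam3=0.1):
        """Evaluate an illustrative co-regularized sparse-group lasso cost.

        groups  : list of integer index arrays, one per predictor group
        weights : dict mapping a pair of group indices (g, h) to a weight
                  derived from the auxiliary distance between those groups
        """
        n = X.shape[0]
        residual = y - X @ beta
        loss = 0.5 / n * residual @ residual                      # squared-error loss
        l1 = lam1 * np.abs(beta).sum()                            # lasso (sparsity) penalty
        group_pen = lam2 * sum(np.sqrt(len(g)) * np.linalg.norm(beta[g])
                               for g in groups)                   # group lasso penalty
        # Co-regularization (assumed form): related groups are pushed toward
        # similar contributions to the fitted response.
        contrib = [X[:, g] @ beta[g] for g in groups]
        coreg = lam3 * sum(w * np.sum((contrib[g] - contrib[h]) ** 2) / n
                           for (g, h), w in weights.items())
        return loss + l1 + group_pen + coreg

    # Toy usage: two groups of three predictors marked as related.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 6))
    y = rng.normal(size=50)
    beta = rng.normal(size=6)
    groups = [np.arange(0, 3), np.arange(3, 6)]
    weights = {(0, 1): 1.0}
    print(coreg_sgl_objective(beta, X, y, groups, weights))

In this sketch, increasing lam3 ties the fitted contributions of related groups more tightly together, which is one plausible way to encode the "similar contributions" requirement described in the abstract.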

Keywords

Sparse models · Co-regularized learning · Systems biology

Acknowledgments

This work was funded by the TNO Early Research Program (ERP) “Making sense of big data”.


Copyright information

© Springer International Publishing AG 2016

Authors and Affiliations

  • Paula L. Amaral Santos (1)
  • Sultan Imangaliyev (1)
  • Klamer Schutte (1)
  • Evgeni Levin (1)
  1. TNO Research, The Hague, The Netherlands
