Bayesian Probit Model with \( L^{\alpha} \) and Elastic Net Regularization

  • Tao Li
  • Jinwen Ma
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10954)

Abstract

Most classification and regression models are established from the frequentist perspective, and Bayesian counterparts have been developed for only some of them. In particular, the Bayesian analysis of classification models, especially penalized classification models, has rarely been investigated. In this paper, we propose two probit models, one with \( L^{\alpha} \) regularization and one with elastic net regularization, from a Bayesian perspective. Experiments on a real-world dataset demonstrate that the proposed probit models can offer certain advantages over their frequentist counterparts.
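For concreteness, the two penalties named above can be written out explicitly. The following display is a sketch of the standard frequentist objectives that the paper recasts in Bayesian form, assuming the usual probit likelihood \( P(y_i = 1 \mid x_i) = \Phi(x_i^{\top}\beta) \), its log-likelihood \( \ell(\beta) \), and tuning parameters \( \lambda, \lambda_1, \lambda_2 > 0 \):

\[
\hat{\beta}_{L^{\alpha}} = \arg\min_{\beta} \Bigl\{ -\ell(\beta) + \lambda \sum_{j=1}^{p} |\beta_j|^{\alpha} \Bigr\}, \quad \alpha > 0,
\qquad
\hat{\beta}_{\mathrm{EN}} = \arg\min_{\beta} \Bigl\{ -\ell(\beta) + \lambda_1 \|\beta\|_1 + \lambda_2 \|\beta\|_2^2 \Bigr\}.
\]

In the Bayesian reading, each penalty corresponds to a prior \( \pi(\beta) \propto \exp\bigl\{-\lambda \sum_j |\beta_j|^{\alpha}\bigr\} \) (and analogously for the elastic net), and inference proceeds by sampling from the posterior rather than by optimization.

As a minimal computational illustration of how such posteriors are typically sampled, the sketch below implements the classical Albert-Chib (1993) data-augmentation Gibbs sampler for a Bayesian probit model. To stay self-contained it uses a plain Gaussian \( N(0, \tau^2 I) \) prior (a ridge-like stand-in), not the paper's \( L^{\alpha} \) or elastic-net priors; the function name, the hyperparameter tau2, and the defaults are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import truncnorm

def gibbs_probit_ridge(X, y, tau2=10.0, n_iter=2000, seed=0):
    """Albert-Chib Gibbs sampler for a Bayesian probit model with a
    Gaussian beta ~ N(0, tau2 * I) prior (a ridge-like stand-in for
    L^alpha / elastic-net priors)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = np.zeros(p)
    # With a fixed Gaussian prior, the conditional posterior covariance
    # of beta given the latent z does not change across iterations.
    V = np.linalg.inv(X.T @ X + np.eye(p) / tau2)
    L = np.linalg.cholesky(V)
    samples = np.empty((n_iter, p))
    for t in range(n_iter):
        # Step 1: draw latent utilities z_i ~ N(x_i' beta, 1), truncated
        # to (0, inf) if y_i = 1 and to (-inf, 0) if y_i = 0.
        mu = X @ beta
        a = np.where(y == 1, -mu, -np.inf)  # standardized lower bounds
        b = np.where(y == 1, np.inf, -mu)   # standardized upper bounds
        z = truncnorm.rvs(a, b, loc=mu, scale=1.0, random_state=rng)
        # Step 2: draw beta | z ~ N(V X' z, V).
        beta = V @ (X.T @ z) + L @ rng.standard_normal(p)
        samples[t] = beta
    return samples
```

On a design matrix X (n x p) and labels y in {0, 1}, averaging the rows of samples after discarding a burn-in period gives posterior-mean coefficients. Replacing the Gaussian prior with a scale-mixture-of-normals representation of the \( L^{\alpha} \) or elastic-net prior would add one more conditional sampling step per iteration, which is broadly how such penalized priors are handled in Bayesian samplers.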

Keywords

Bayesian classification · Probit model · \( L^{\alpha} \) regularization · Elastic net

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Department of Information Science, School of Mathematical Sciences and LMAM, Peking University, Beijing, China