
Extreme Gradient Boosting with Squared Logistic Loss Function

  • Nonita Sharma
  • Anju
  • Akanksha Juneja
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 748)

Abstract

Tree boosting has empirically proven to be a highly effective and versatile approach for predictive modeling. The core argument is that tree boosting can adaptively determine the local neighborhoods of the model, thereby taking the bias-variance trade-off into consideration during model fitting. Recently, a tree boosting method known as XGBoost has gained popularity by providing higher accuracy. XGBoost further introduces some improvements which allow it to deal with the bias-variance trade-off even more carefully. In this manuscript, the prediction accuracy of XGBoost is further enhanced by applying a loss function named squared logistic loss (SqLL). The accuracy of the proposed algorithm, i.e., XGBoost with SqLL, is evaluated using the train/test split, K-fold cross-validation, and stratified K-fold cross-validation.
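The abstract does not spell out the exact form of SqLL, so the sketch below assumes it is the square of the standard logistic loss, L(y, f) = [log(1 + e^{-yf})]^2 with y ∈ {−1, +1} and f the raw boosting score; the function name squared_logistic_obj, the hyperparameters, and the synthetic dataset are illustrative assumptions, not taken from the paper. It shows how such a loss could be supplied to XGBoost through the library's custom-objective hook (the obj argument of xgb.train), which expects the first and second derivatives of the loss with respect to the raw score.

```python
import numpy as np
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

def squared_logistic_obj(preds, dtrain):
    """Assumed SqLL form (not taken from the paper):
    L(y, f) = [log(1 + exp(-y * f))]^2, y in {-1, +1}, f = raw score."""
    y = 2.0 * dtrain.get_label() - 1.0            # map {0, 1} labels to {-1, +1}
    m = y * preds                                 # margin
    s = np.logaddexp(0.0, -m)                     # log(1 + exp(-m)), numerically stable
    p = np.exp(-s)                                # sigmoid(m) = exp(-s), also stable
    grad = -2.0 * y * s * (1.0 - p)               # dL/df
    hess = 2.0 * ((1.0 - p) ** 2 + s * p * (1.0 - p))  # d2L/df2 (always positive)
    return grad, hess

# Synthetic binary-classification data, evaluated with a simple train/test split.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
dtrain = xgb.DMatrix(X_tr, label=y_tr)
dtest = xgb.DMatrix(X_te, label=y_te)

booster = xgb.train(
    {"max_depth": 4, "eta": 0.1, "base_score": 0.0},  # base_score 0 keeps raw scores centered
    dtrain,
    num_boost_round=200,
    obj=squared_logistic_obj,
)

# With a custom objective, predict() returns raw margins; threshold at 0.
pred_labels = (booster.predict(dtest) > 0.0).astype(int)
print("test accuracy:", (pred_labels == y_te).mean())
```

The same booster could be rescored under the other two protocols the abstract lists by refitting it inside sklearn.model_selection.KFold or StratifiedKFold loops; only the data-splitting step changes, not the objective.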

Keywords

Boosting · Extreme gradient boosting · Squared logistic loss


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Dr. B. R. Ambedkar National Institute of Technology Jalandhar, Jalandhar, India
  2. Jawaharlal Nehru University, New Delhi, India
