Logistic Model Trees

  • Niels Landwehr
  • Mark Hall
  • Eibe Frank
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2837)

Abstract

Tree induction methods and linear models are popular techniques for supervised learning tasks, both for the prediction of nominal classes and of continuous numeric values. For predicting numeric quantities, there has been work on combining these two schemes into ‘model trees’, i.e., trees that contain linear regression functions at the leaves. In this paper, we present an algorithm that adapts this idea to classification problems, using logistic regression instead of linear regression. We use a stagewise fitting process to construct the logistic regression models; it selects relevant attributes in the data in a natural way, and we show how it can be used to build the logistic regression models at the leaves by incrementally refining those constructed at higher levels in the tree. We compare the performance of our algorithm against that of decision trees and logistic regression on 32 benchmark UCI datasets, and show that it achieves higher classification accuracy on average than either of the other two methods.
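
The stagewise fitting process mentioned in the abstract can be sketched in code. What follows is a minimal, hypothetical Python/NumPy illustration of one such process, in the spirit of additive logistic regression (LogitBoost): each iteration fits a weighted simple linear regression on a single attribute to the current working responses, so relevant attributes enter the model one at a time. The function name and the numerical details (iteration count, damping factor, tolerances) are illustrative assumptions, not the paper's actual implementation.

    import numpy as np

    def fit_stagewise_logistic(X, y, n_iter=30):
        """Stagewise (LogitBoost-style) fitting of an additive logistic model.

        Each iteration fits a weighted simple linear regression on the single
        attribute that best explains the current working responses, so
        attribute selection happens as a side effect of the fitting process.
        y is a 0/1 vector; returns (intercept, per-attribute slopes).
        """
        n, d = X.shape
        F = np.zeros(n)                 # current model output (log-odds scale)
        intercept, coefs = 0.0, np.zeros(d)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-F))            # current probability estimates
            w = np.clip(p * (1.0 - p), 1e-8, None)  # working weights
            z = (y - p) / w                         # working responses
            best = None
            for j in range(d):                      # try each attribute, keep the best
                xj = X[:, j]
                xbar = np.average(xj, weights=w)
                zbar = np.average(z, weights=w)
                sxx = np.sum(w * (xj - xbar) ** 2) + 1e-12
                b = np.sum(w * (xj - xbar) * (z - zbar)) / sxx
                a = zbar - b * xbar
                err = np.sum(w * (z - (a + b * xj)) ** 2)
                if best is None or err < best[0]:
                    best = (err, j, a, b)
            _, j, a, b = best
            F += 0.5 * (a + b * X[:, j])            # damped additive update
            intercept += 0.5 * a
            coefs[j] += 0.5 * b
        return intercept, coefs

    # Toy usage: the result is an ordinary logistic model whose coefficients
    # were built up one attribute at a time.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(float)
    b0, b = fit_stagewise_logistic(X, y)
    p = 1.0 / (1.0 + np.exp(-(b0 + X @ b)))
    print("training accuracy:", np.mean((p > 0.5) == (y == 1)))

In a logistic model tree, the incremental refinement described in the abstract would then amount to continuing this loop at a child node, starting from the additive model F inherited from the parent rather than from zero.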

Keywords

Logistic Regression Model, Logistic Regression Model Tree, Linear Regression Model, Regression Tree

Copyright information

© Springer-Verlag Berlin Heidelberg 2003

Authors and Affiliations

  • Niels Landwehr (1, 2)
  • Mark Hall (2)
  • Eibe Frank (2)
  1. Department of Computer Science, University of Freiburg, Freiburg, Germany
  2. Department of Computer Science, University of Waikato, Hamilton, New Zealand
