A New Way to Build Oblique Decision Trees Using Linear Programming
Adding linear-combination splits to decision trees allows multivariate relations to be expressed more accurately and succinctly than with univariate splits alone. We propose to use linear programming to determine an oblique hyperplane that separates two sets. This formulation also yields a straightforward way to handle missing values. A computational comparison of the linear programming approach with classical univariate split algorithms demonstrates the merits of the method.
Key words: Oblique decision tree, missing values, linear programming
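The preview does not give the exact optimization problem used to compute each oblique split. As an illustrative sketch only, one standard way to separate two point sets with a hyperplane by linear programming is the robust LP formulation of Bennett and Mangasarian; the notation below (matrices A and B, weight vector w, threshold \gamma, slack vectors y and z) is ours and is assumed, not taken from the paper:

\[
\begin{aligned}
\min_{w,\,\gamma,\,y,\,z}\quad & \frac{1}{m}\, e^{\top} y \;+\; \frac{1}{k}\, e^{\top} z \\
\text{subject to}\quad & A w \;-\; e\gamma \;+\; y \;\ge\; e, \\
& -B w \;+\; e\gamma \;+\; z \;\ge\; e, \\
& y \ge 0, \qquad z \ge 0,
\end{aligned}
\]

where A \in \mathbb{R}^{m \times n} and B \in \mathbb{R}^{k \times n} hold the observations of the two classes as rows and e denotes a vector of ones of appropriate dimension. Each component of y (resp. z) measures by how much a point of A (resp. B) violates the margin around the hyperplane x^{\top} w = \gamma, so the objective minimizes the average violation per class; the resulting oblique split at a tree node is the test x^{\top} w \ge \gamma. Any standard LP solver can handle a problem of this form; whether the paper's treatment of missing values modifies these constraints cannot be determined from the preview.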