Comparative Testing of Hyper-Planar Classifiers on Continuous Data Domains
This paper details a set of comparative tests conducted between five classification algorithms on three real-world, continuously valued data sets. The algorithms were selected to represent the two most popular classification methods, neural networks and decision trees, as well as hybrid algorithms that incorporate features of both techniques and construct an architecture to model the problem domain.
The three real-world data sets have previously been used in the StatLog tests, and these experiments can be viewed as an extension of that work. Due to the nature of these data sets, each contains some level of noise, which affects the learning procedure to varying degrees. A maximum bound on a classifier's generalisation is discussed; it arises from the loss of information incurred when allowing for noise in a model of the data domain.
The results of these tests establish the levels of performance that can be achieved by hyper-planar classifiers on noisy, continuously valued data sets.
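The comparative setup described above can be sketched in miniature. The following is an illustrative assumption, not the paper's actual algorithms or data: a single-neuron perceptron stands in for a hyper-planar classifier, an axis-parallel decision stump stands in for a one-node decision tree, and a hypothetical synthetic 2-D problem with 10% label noise stands in for the real-world data sets. With symmetric 10% label noise, expected test accuracy is capped near 90%, echoing the generalisation bound mentioned above.

```python
import random

# Synthetic continuous 2-D data (hypothetical stand-in for the paper's
# real-world data sets): true class is 1 when x + y > 1, with a fraction
# of labels flipped to simulate noise.
def make_data(n, noise=0.1, seed=0):
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x, y = rng.random(), rng.random()
        label = 1 if x + y > 1.0 else 0
        if rng.random() < noise:
            label = 1 - label  # label noise caps attainable accuracy
        data.append(((x, y), label))
    return data

# Hyper-planar classifier: a single perceptron unit, predicting w.x + b > 0.
def train_perceptron(data, epochs=50, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x, y), t in data:
            pred = 1 if w[0] * x + w[1] * y + b > 0 else 0
            err = t - pred
            w[0] += lr * err * x
            w[1] += lr * err * y
            b += lr * err
    return w, b

def perceptron_acc(model, data):
    w, b = model
    hits = sum((1 if w[0] * x + w[1] * y + b > 0 else 0) == t
               for (x, y), t in data)
    return hits / len(data)

# One-node decision tree (stump): best axis-parallel threshold split,
# chosen here by exhaustive grid search over thresholds.
def train_stump(data):
    best = (0, 0.5, 1, 0.0)  # (axis, threshold, class_above, train_accuracy)
    for axis in (0, 1):
        for thr in [i / 20 for i in range(1, 20)]:
            for above in (0, 1):
                acc = sum(((above if p[axis] > thr else 1 - above) == t)
                          for p, t in data) / len(data)
                if acc > best[3]:
                    best = (axis, thr, above, acc)
    return best

def stump_acc(stump, data):
    axis, thr, above, _ = stump
    hits = sum(((above if p[axis] > thr else 1 - above) == t) for p, t in data)
    return hits / len(data)

# Compare the two on a held-out noisy test set.
train = make_data(300, seed=1)
held_out = make_data(200, seed=2)
linear_acc = perceptron_acc(train_perceptron(train), held_out)
tree_acc = stump_acc(train_stump(train), held_out)
print(f"hyperplane (perceptron): {linear_acc:.2f}, tree stump: {tree_acc:.2f}")
```

The stump's grid search is a toy substitute for the entropy-based splitting of a real decision tree induction algorithm such as ID3; the point of the sketch is only that an oblique hyperplane can separate this diagonal boundary better than any single axis-parallel split, while the injected label noise limits what either classifier can achieve.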
Keywords: Decision Region · Pure Class · Average Classification Rate · Random Search Technique · Entropy Network
- Michie D., Spiegelhalter D.J., Taylor C.C.: Machine Learning, Neural and Statistical Classification, Ellis Horwood Series in Artificial Intelligence, Ellis Horwood, 1994.
- Bai B. and Farhat N.H.: Learning Networks for Extrapolation and Radar Target Identification, Neural Networks, pp. 507–529, 1992.
- Hertz J., Krogh A., Palmer R.: Introduction to the Theory of Neural Computation, Santa Fe Institute, Addison Wesley, 1991.
- McLean D., Bandar Z., O'Shea J.: The Evolution of a Feed Forward Neural Network trained under Back-Propagation, ICANNGA '97, 1997.
- Sankar A. and Mammone R.J.: Speaker Independent Vowel Recognition using Neural Tree Networks, Proceedings of the International Joint Conference on Neural Networks, Vol. 2, pp. 809–814, 1991.
- Sethi I.K. and Otten M.: Comparison Between Entropy Net and Decision Tree Classifiers, International Joint Conference on Neural Networks, Vol. 3, pp. 63–68, 1990.
- McLean D., Bandar Z., O'Shea J.: Improved Interpolation and Extrapolation from Continuous Training Examples Using a New Neuronal Model with an Adaptive Steepness, 2nd Australian and New Zealand Conference on Intelligent Information Systems, IEEE, pp. 125–129, 1994.
- McLean D.: RDSE Algorithm, http://www.doc.mmu.ac.uk/STAFF/D.McLean/RDSE, 1998.
- Quinlan J.R.: Induction of Decision Trees, Machine Learning, Vol. 1, pp. 81–106, 1986.