Comparative Testing of Hyper-Planar Classifiers on Continuous Data Domains

  • David McLean
  • Zuhair Bandar
Conference paper


This paper details a set of comparative tests conducted between five classification algorithms using three real-world, continuously valued data sets. The algorithms were selected to represent the two most popular classification methods, neural networks and decision trees, as well as hybrid algorithms that incorporate features of both techniques. These hybrid algorithms construct an architecture to model the problem domain.
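
To make the shared primitive concrete: both a thresholded neural unit and a decision-tree node test partition a continuous input space with hyperplanes, an axis-parallel tree split being the special case of a hyperplane restricted to one input dimension. The following is a minimal sketch of that idea, not code from the paper; the function name, weights, and example points are assumptions chosen purely for illustration.

```python
import numpy as np

def hyperplane_split(x, w, b):
    """Assign a point to one of the two decision regions induced by
    the hyperplane w.x + b = 0.

    An axis-parallel decision-tree test such as x[0] <= t is the
    special case w = (1, 0, ...), b = -t, so both neural units and
    tree nodes carve the input space with hyperplanes.
    """
    return int(np.dot(w, x) + b > 0.0)

# Two continuous-valued points on either side of the plane x1 - x2 + 0.5 = 0.
w, b = np.array([1.0, -1.0]), 0.5
print(hyperplane_split(np.array([2.0, 0.0]), w, b))  # prints 1
print(hyperplane_split(np.array([0.0, 2.0]), w, b))  # prints 0
```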

The three real-world data sets have previously been used in the StatLog tests [1], so these experiments can be viewed as an extension of that work. Due to the nature of these data sets, each contains some level of noise, which affects the learning procedure to varying degrees. A maximum bound on a classifier's generalisation is discussed, which arises from the loss of information incurred when allowing for noise in a model of the data domain.
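
As a rough numerical illustration of why noise caps attainable accuracy (this is the generic Bayes-rate argument, not the paper's own derivation): when class-conditional densities overlap, even the optimal decision rule misclassifies some points, so its classification rate bounds any classifier's generalisation from above. The Gaussian parameters below are assumptions chosen only to show the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two equiprobable classes whose 1-D feature distributions overlap (noise).
mu0, mu1, sigma = 0.0, 1.0, 1.0
n = 200_000
x0 = rng.normal(mu0, sigma, n)   # samples from class 0
x1 = rng.normal(mu1, sigma, n)   # samples from class 1

# Optimal (Bayes) rule for equal priors and equal variances: threshold at
# the midpoint of the means; no classifier can do better on average.
t = (mu0 + mu1) / 2.0
bayes_rate = 0.5 * np.mean(x0 <= t) + 0.5 * np.mean(x1 > t)
print(f"upper bound on generalisation: {bayes_rate:.3f}")  # ~0.691
```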

The results of these tests establish the levels of performance that can be achieved using hyper-planar classifiers on noisy, continuously valued data sets.


Keywords: Decision Region, Pure Class, Average Classification Rate, Random Search Technique, Entropy Network




References

[1] Michie D., Spiegelhalter D.J., Taylor C.C.: Machine Learning, Neural and Statistical Classification, Ellis Horwood Series in Artificial Intelligence, Ellis Horwood, 1994.
[2] Bai B. and Farhat N.H.: Learning Networks for Extrapolation and Radar Target Identification, Neural Networks, pp. 507–529, 1992.
[3] Chow M. and Mangum P.: Incipient Fault Detection in DC Machines Using a Neural Network, IEEE 22nd Asilomar Conference on Signals, Systems and Computers, Vol. 2, pp. 706–709, 1989.
[4] Rumelhart D., Hinton G., Williams R.: Learning Representations by Back-Propagating Errors, Nature, Vol. 323, pp. 533–536, 1986.
[5] Hertz J., Krogh A., Palmer R.: Introduction to the Theory of Neural Computation, Santa Fe Institute, Addison-Wesley, 1991.
[6] McLean D., Bandar Z., O'Shea J.: The Evolution of a Feed Forward Neural Network Trained under Back-Propagation, ICANNGA '97, 1997.
[7] Sethi I.K., Sarvarayudu G.P.R.: Hierarchical Classifier Design Using Mutual Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. PAMI-4, No. 4, pp. 441–445, 1982.
[8] Sankar A. and Mammone R.J.: Optimal Pruning of Neural Tree Networks for Improved Generalisation, IEEE International Joint Conference on Neural Networks, Seattle, Vol. 2, pp. 219–224, 1991.
[9] Sankar A. and Mammone R.J.: Speaker Independent Vowel Recognition Using Neural Tree Networks, Proceedings of the International Joint Conference on Neural Networks, Vol. 2, pp. 809–814, 1991.
[10] Sethi I.K.: Entropy Nets: From Decision Trees to Neural Networks, Proceedings of the IEEE, Vol. 78, No. 10, pp. 1605–1613, 1990.
[11] Sethi I.K. and Otten M.: Comparison Between Entropy Net and Decision Tree Classifiers, International Joint Conference on Neural Networks, Vol. 3, pp. 63–68, 1990.
[12] McLean D., Bandar Z., O'Shea J.: Improved Interpolation and Extrapolation from Continuous Training Examples Using a New Neuronal Model with an Adaptive Steepness, 2nd Australian and New Zealand Conference on Intelligent Information Systems, IEEE, pp. 125–129, 1994.
[13] McLean D., Bandar Z., O'Shea J.: An Empirical Comparison of Back Propagation and the RDSE Algorithm on Continuously Valued Real World Data, Neural Networks, Vol. 11, pp. 1685–1694, 1998.
[14] McLean D.: RDSE Algorithm, 1998.
[15] Quinlan J.R.: Induction of Decision Trees, Machine Learning, Vol. 1, pp. 81–106, 1986.
[16] Baba N.: A New Approach for Finding the Global Minimum of Error Function of Neural Networks, Neural Networks, Vol. 2, pp. 367–373, 1989.
[17] Lachenbruch P. and Mickey M.: Estimation of Error Rates in Discriminant Analysis, Technometrics, Vol. 10, pp. 1–11, 1968.

Copyright information

© Springer-Verlag Wien 1999

Authors and Affiliations

  • David McLean
  • Zuhair Bandar

The Intelligent Systems Group, The Manchester Metropolitan University, Manchester, UK
