Improving the Competency of Classifiers through Data Generation
This paper describes a hybrid approach in which sub-symbolic neural networks and symbolic machine learning algorithms are grouped into an ensemble of classifiers. Initially, each classifier determines the portion of the data in which it is most competent. This competency information is then used to generate new data for further training and prediction. Applying this approach to a difficult-to-learn domain shows an increase in predictive power, in terms of both the accuracy and the level of competency of the ensemble and of its component classifiers.
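The two-step idea in the abstract (per-classifier competency assessment, then competency-guided data generation) can be sketched as follows. Everything here is an assumption for illustration: the toy one-dimensional dataset, the two hand-built decision rules standing in for the neural and symbolic components, the fixed region split, and the jitter-based generation scheme are not from the paper.

```python
import random

random.seed(0)

# Toy 1-D dataset (hypothetical): label is 1 when the feature exceeds 0.5.
data = [random.random() for _ in range(200)]
labels = [1 if x > 0.5 else 0 for x in data]

# Two hand-built rules standing in for the ensemble's neural and symbolic
# members (hypothetical, not the authors' models).
def clf_low(x):   # perfect below 0.5, unreliable near the boundary above it
    return 1 if x > 0.6 else 0

def clf_high(x):  # perfect at and above 0.5, unreliable just below it
    return 1 if x > 0.4 else 0

def competency(clf, region):
    """Accuracy of clf restricted to the instances inside `region`."""
    hits = [clf(x) == y for x, y in zip(data, labels) if region(x)]
    return sum(hits) / len(hits) if hits else 0.0

# Step 1: each classifier identifies the region where it is most competent.
regions = {"low": lambda x: x < 0.5, "high": lambda x: x >= 0.5}
assign = {}
for name, clf in [("clf_low", clf_low), ("clf_high", clf_high)]:
    assign[name] = max(regions, key=lambda r: competency(clf, regions[r]))

# Step 2: generate new training instances inside each classifier's competent
# region, labelled by that classifier (Gaussian jitter around existing
# instances is one plausible generation scheme, chosen here for brevity).
def generate(clf, region, n=50, sigma=0.02):
    pool = [x for x in data if region(x)]
    new = []
    while len(new) < n:
        x = random.choice(pool) + random.gauss(0, sigma)
        if region(x):                 # keep the new instance in-region
            new.append((x, clf(x)))   # label it with the competent classifier
    return new

new_data = (generate(clf_low, regions[assign["clf_low"]]) +
            generate(clf_high, regions[assign["clf_high"]]))
```

On this toy split each rule is perfectly accurate in exactly one half of the feature space, so the competency step assigns `clf_low` to the low region and `clf_high` to the high region, and the generated instances carry labels from whichever classifier owns their region.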
Keywords: Problem Domain, Training Instance, Data Generation Process, Disjunctive Normal Form, Decision Tree Algorithm