Feature Selection Based on Information Theory Filters
Feature selection is an essential component in all data mining applications. Features were ranked by several inexpensive methods based on information theory. The accuracy of neural, similarity-based, and decision tree classifiers was calculated with a reduced number of features. A comparison with computationally more expensive feature elimination methods was made.
Keywords: Feature Selection, Mutual Information, Feature Selection Method, Feature Ranking, Decision Tree Classifier
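The filter approach described in the abstract ranks each feature by its mutual information with the class label, then keeps only the top-ranked features for the classifier. A minimal sketch of such a ranking, assuming discrete (or pre-discretized) feature values; the function names and the synthetic two-feature dataset are illustrative, not from the paper:

```python
import numpy as np

def mutual_information(x, y):
    """Estimate I(X;Y) in bits for two discrete-valued arrays."""
    _, inv_x = np.unique(x, return_inverse=True)
    _, inv_y = np.unique(y, return_inverse=True)
    joint = np.zeros((inv_x.max() + 1, inv_y.max() + 1))
    for i, j in zip(inv_x, inv_y):
        joint[i, j] += 1
    joint /= joint.sum()                      # joint distribution p(x, y)
    px = joint.sum(axis=1, keepdims=True)     # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)     # marginal p(y)
    nz = joint > 0                            # avoid log(0) on empty cells
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

def rank_features(X, y):
    """Return feature indices sorted by decreasing MI with the class, plus the scores."""
    scores = [mutual_information(X[:, j], y) for j in range(X.shape[1])]
    order = sorted(range(X.shape[1]), key=lambda j: -scores[j])
    return order, scores

# Tiny synthetic example: feature 0 copies the class label, feature 1 is random noise,
# so the filter should rank feature 0 first.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = np.column_stack([y, rng.integers(0, 2, 200)])
order, scores = rank_features(X, y)
```

Because the ranking needs only class-conditional counts, it is far cheaper than wrapper methods that retrain the classifier for every candidate feature subset, which is the trade-off the paper evaluates.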