
Correlation-Based and Causal Feature Selection Analysis for Ensemble Classifiers

  • Rakkrit Duangsoithong
  • Terry Windeatt
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5998)

Abstract

High-dimensional feature spaces with relatively few samples usually lead to poor classifier performance in machine learning, neural network, and data mining systems. This paper presents a comparative analysis of correlation-based and causal feature selection for ensemble classifiers. Multilayer Perceptron (MLP) and Support Vector Machine (SVM) are used as base classifiers and compared with Naive Bayes and Decision Tree. According to the results, the correlation-based feature selection algorithm eliminates more redundant and irrelevant features, and provides slightly better accuracy and lower complexity, than causal feature selection. An ensemble built with the Bagging algorithm improves accuracy under both correlation-based and causal feature selection.
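To make the comparison concrete, the sketch below shows one plausible form of the correlation-based branch of the pipeline: features are ranked by symmetrical uncertainty with the class (the measure underlying CFS/FCBF-style selection), a subset is retained, and a bagged MLP ensemble is trained on the reduced set. Only the relevance-ranking step is shown; full FCBF additionally discards features more correlated with an already-selected feature than with the class. The dataset (scikit-learn's breast cancer data), the 0.3 selection threshold, and the network and ensemble sizes are illustrative assumptions, not the paper's experimental settings; the causal branch (Markov blanket discovery, e.g. HITON) is omitted because it requires a Bayesian network structure-learning library.

import numpy as np
from scipy.stats import entropy
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import mutual_info_score
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def symmetrical_uncertainty(x, y):
    # SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)), computed on discretized variables.
    h_x, h_y = entropy(np.bincount(x)), entropy(np.bincount(y))
    return 0.0 if h_x + h_y == 0 else 2.0 * mutual_info_score(x, y) / (h_x + h_y)

X, y = load_breast_cancer(return_X_y=True)

# Discretize each continuous feature into ten equal-frequency bins so the
# entropy-based SU measure is well defined.
X_binned = np.stack(
    [np.digitize(col, np.quantile(col, np.linspace(0.1, 0.9, 9))) for col in X.T],
    axis=1,
)

# Rank features by relevance to the class label; 0.3 is an arbitrary cutoff.
su = np.array([symmetrical_uncertainty(X_binned[:, j], y) for j in range(X.shape[1])])
selected = su > 0.3
print(f"kept {selected.sum()} of {X.shape[1]} features")

# Bagged MLP ensemble on the reduced feature set (sizes are illustrative).
base = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(16,), max_iter=500))
ensemble = BaggingClassifier(estimator=base, n_estimators=10)
print(f"bagged-MLP accuracy: {cross_val_score(ensemble, X[:, selected], y, cv=5).mean():.3f}")

Scaling inside the base pipeline rather than once on the whole dataset keeps each bootstrap replicate self-contained, which is the usual way to combine preprocessing with Bagging.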

Keywords

Correlation-based feature selection, causal feature selection, ensemble classification


Copyright information

© Springer-Verlag Berlin Heidelberg 2010

Authors and Affiliations

  • Rakkrit Duangsoithong (1)
  • Terry Windeatt (1)

  1. Center for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
