Classifier Selection by Clustering
This paper proposes a combination algorithm that improves classifier ensembles in both the stability of their results and their accuracy. The proposed method uses bagging and boosting to generate the base classifiers, which are fixed as decision trees throughout ensemble creation. The classifiers are then partitioned with a clustering algorithm, and the final ensemble is formed by selecting one classifier from each cluster. A weighted majority vote serves as the consensus function of the ensemble. We evaluate the framework on several real datasets from the UCI repository; the results show the effectiveness of the algorithm compared with the original bagging and boosting algorithms.
Keywords: Decision Tree, Classifier Ensembles, Bagging, AdaBoost
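The pipeline described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: clustering the classifiers by their prediction vectors on a validation set, selecting the most accurate classifier from each cluster, and using validation accuracies as vote weights are all assumptions, since the abstract does not fix these details.

```python
# Illustrative sketch (assumed details, not the paper's exact method):
# 1) bag decision trees, 2) cluster classifiers by their prediction
# vectors on a validation set, 3) keep the best classifier per cluster,
# 4) combine the selected classifiers with a weighted majority vote.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

X, y = load_iris(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Step 1: generate base classifiers (decision trees) via bagging.
bag = BaggingClassifier(DecisionTreeClassifier(), n_estimators=20,
                        random_state=0)
bag.fit(X_tr, y_tr)

# Step 2: cluster the classifiers; each classifier is represented by
# its vector of predicted labels on the held-out validation set.
preds = np.array([est.predict(X_val) for est in bag.estimators_])
n_clusters = 5
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(preds)

# Step 3: from each cluster, keep the classifier with the highest
# validation accuracy; its accuracy becomes its vote weight (assumed).
selected, weights = [], []
for c in range(n_clusters):
    members = np.where(km.labels_ == c)[0]
    accs = [(bag.estimators_[i].predict(X_val) == y_val).mean()
            for i in members]
    best = members[int(np.argmax(accs))]
    selected.append(bag.estimators_[best])
    weights.append(max(accs))

# Step 4: weighted majority vote over the selected classifiers.
def predict(X_new):
    votes = np.array([clf.predict(X_new) for clf in selected])
    scores = np.zeros((X_new.shape[0], len(np.unique(y))))
    for w, v in zip(weights, votes):
        scores[np.arange(X_new.shape[0]), v.astype(int)] += w
    return scores.argmax(axis=1)

acc = (predict(X_val) == y_val).mean()
```

Clustering in prediction space groups classifiers that make similar errors, so selecting one representative per cluster trades a small amount of individual accuracy for greater diversity among the remaining voters.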