A Novel Approach for Combining Experts Rating Scores
Based on the same information, subjects are independently classified into two categories by many experts. The overall accuracy of prediction differs from expert to expert. Most of the time, accuracy can be improved by taking the vote of a committee of experts, for instance by simply averaging the experts' ratings. In this study, we introduce an ROC-invariant representation of experts' rating scores and propose using the beta distribution to characterize the rating scores each subject receives. The method-of-moments estimators of the two shape parameters of the beta distribution can then serve as additional features alongside the rating scores or their equivalents. To increase the diversity of candidate combined scores, we apply a boosting procedure to a set of nested regression models. With the proposed approach, we were able to win the large-AUC task of the 2009 Australia Data Mining Analytical Challenge. The approach is computationally light, easy to implement, and transparent to the user; most of all, it produces much better results than simple averaging. For an application whose base consists of hundreds of millions of subjects, a 1% improvement in predictive accuracy matters greatly, and a method that requires less effort and fewer resources is one more advantage for practitioners.
Keywords: Model Ensemble, ROC Curves, Data Mining
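The abstract's central feature-construction step can be sketched briefly. Treating the ratings that one subject receives from several experts as a sample from a beta distribution, the method-of-moments estimators of the two shape parameters follow from the sample mean and variance. The sketch below is a minimal illustration, not the authors' implementation; the function name and the example ratings are hypothetical, and the population (biased) variance is assumed.

```python
def beta_moment_estimates(scores):
    """Method-of-moments estimators of the Beta(alpha, beta) shape
    parameters, from ratings assumed to lie strictly in (0, 1)."""
    n = len(scores)
    m = sum(scores) / n                        # sample mean
    v = sum((s - m) ** 2 for s in scores) / n  # population variance
    t = m * (1 - m) / v - 1                    # common factor
    return m * t, (1 - m) * t                  # alpha_hat, beta_hat

# Hypothetical example: one subject rated by five experts on (0, 1).
ratings = [0.62, 0.70, 0.55, 0.66, 0.58]
a, b = beta_moment_estimates(ratings)

# The pair (a, b) can be appended to the raw ratings as extra features;
# note a / (a + b) recovers the sample mean, so the estimates encode the
# simple average together with the spread of expert opinion.
features = ratings + [a, b]
```

Because alpha/(alpha + beta) is the mean of a beta distribution, these two features carry both the average rating and its dispersion, which is what makes them useful additions to the raw scores before the boosted regression stage.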