Abstract
The classification performance of an associative classification algorithm is strongly dependent on the statistical measure or metric used to quantify the strength of the association between features and classes (e.g., confidence, correlation). Previous studies have shown that classification algorithms produced using different metrics may predict conflicting outputs for the same input, and that the best metric to use is data-dependent and rarely known while designing the algorithm (Veloso et al.: Competence–conscious associative classification. Stat. Anal. Data Min. 2(5–6):361–377, 2009; The metric dilemma: competence–conscious associative classification. In: Proceedings of the SIAM Data Mining Conference (SDM). SIAM, 2009). This uncertainty concerning the optimal match between metrics and problems is a dilemma, and it prevents associative classification algorithms from achieving their maximal performance. A possible solution to this dilemma is to exploit the competence, expertise, or assertiveness of classification algorithms produced using different metrics. The basic idea is that each of these algorithms has a specific sub-domain for which it is most competent (i.e., there is a set of inputs for which it consistently provides more accurate predictions than algorithms produced using other metrics). In particular, we investigate stacking-based meta-learning methods, which use the training data to find the domain of competence of associative classification algorithms produced using different metrics. The result is a set of competing algorithms produced using different metrics. The ability to detect which of these algorithms is the most competent one for a given input leads to new algorithms, which are denoted competence–conscious associative classification algorithms.
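The selection scheme the abstract describes can be sketched in a few lines. The example below is a hypothetical toy, not the authors' implementation: the two base classifiers stand in for associative classifiers built from the same rule set but ranked by different metrics (e.g., confidence vs. lift), and the stacking meta-level is a simple 1-NN selector over the training data; all names and data are invented for illustration.

```python
# Toy sketch of stacking-based competence-conscious selection.
# Hypothetical data: each instance is (feature set, class label).
train = [
    ({"a", "b"}, "x"),
    ({"a", "c"}, "y"),
    ({"b", "c"}, "y"),
    ({"c", "d"}, "y"),
]

# Competing base classifiers that disagree on some inputs: on
# {"a", "c"}, base1 predicts "x" while base2 predicts "y".
def base1(feats):  # stand-in for a confidence-ranked classifier
    return "x" if "a" in feats else "y"

def base2(feats):  # stand-in for a lift-ranked classifier
    return "x" if "b" in feats else "y"

bases = [base1, base2]

# Stacking meta-level: on the training data, record which bases are
# correct on each instance -- an estimate of their domains of competence.
competence = [(feats, [b(feats) == label for b in bases])
              for feats, label in train]

def predict(feats):
    """Route the input to a base that was competent on the most
    similar training instance (Jaccard similarity, 1-NN)."""
    def sim(s):
        return len(feats & s) / len(feats | s)
    _, correct = max(competence, key=lambda fc: sim(fc[0]))
    chosen = correct.index(True) if True in correct else 0
    return bases[chosen](feats)
```

On `{"b", "c"}` the selector routes to `base1` (competent there) and on `{"a", "c"}` to `base2`, so the combined classifier predicts "y" on both inputs even though each base alone errs on one of them; this per-input routing is what distinguishes competence-conscious selection from picking a single best metric globally.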
Copyright information
© 2011 Adriano Veloso
About this chapter
Cite this chapter
Veloso, A., Meira, W. (2011). Competence–Conscious Associative Classification. In: Demand-Driven Associative Classification. SpringerBriefs in Computer Science. Springer, London. https://doi.org/10.1007/978-0-85729-525-5_6
Print ISBN: 978-0-85729-524-8
Online ISBN: 978-0-85729-525-5