Abstract
Multiple classifier methods are effective solutions to difficult pattern recognition problems. However, their empirical successes and failures have not been fully explained. Amid the excitement and confusion, uncertainty persists about the optimality of method choices for specific problems, because classifier performance depends strongly on the data. In response, I propose that further exploration of the methodology be guided by detailed descriptions of the geometrical characteristics of the data and of the classifier models.
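One concrete example of such a geometrical descriptor (drawn from the broader data-complexity literature, not defined in this abstract) is Fisher's discriminant ratio, which measures how well a single feature separates two classes. A minimal single-feature sketch, assuming the standard definition (mean difference squared over the sum of class variances):

```python
from statistics import mean, pvariance

def fisher_discriminant_ratio(xs_a, xs_b):
    """Fisher's discriminant ratio for one feature and two classes:
    (mu_a - mu_b)^2 / (var_a + var_b).
    Higher values indicate geometrically easier class separation;
    near-zero values indicate heavy overlap along this feature."""
    num = (mean(xs_a) - mean(xs_b)) ** 2
    den = pvariance(xs_a) + pvariance(xs_b)
    # If both classes are constant and equal-variance-free, separation
    # is either perfect (distinct means) or undefined; report infinity.
    return num / den if den > 0 else float("inf")
```

In a multi-feature problem one would typically take the maximum of this ratio over all features, giving a single scalar summary of how "easy" the problem looks to a linear separator along a coordinate axis.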
Copyright information
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Kam Ho, T. (2001). Data Complexity Analysis for Classifier Combination. In: Kittler, J., Roli, F. (eds) Multiple Classifier Systems. MCS 2001. Lecture Notes in Computer Science, vol 2096. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48219-9_6
Print ISBN: 978-3-540-42284-6
Online ISBN: 978-3-540-48219-2