The Science of Pattern Recognition. Achievements and Perspectives

  • Robert P. W. Duin
  • Elżbieta Pękalska
Part of the Studies in Computational Intelligence book series (SCI, volume 63)


Automatic pattern recognition is usually considered an engineering area that focuses on the development and evaluation of systems that imitate or assist humans in their ability to recognize patterns. It may, however, also be considered a science that studies the faculty of human beings (and possibly other biological systems) to discover, distinguish, and characterize patterns in their environment, and accordingly to identify new observations. In this view, the engineering approach to pattern recognition is an attempt to build systems that simulate this phenomenon. In doing so, scientific understanding is gained of what is needed, in general, to recognize patterns.
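As a concrete illustration of the engineering approach described above, consider the nearest-neighbor rule, one of the simplest procedures for identifying a new observation from previously seen examples. The sketch below is not taken from the chapter; the function name and toy data are illustrative.

```python
import math

def nearest_neighbor_classify(train, query):
    """Return the label of the training example closest to `query`.

    `train` is a list of (feature_vector, label) pairs;
    closeness is measured by Euclidean distance.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Pick the training pair whose feature vector is nearest to the query.
    _, label = min(train, key=lambda example: dist(example[0], query))
    return label

# Toy two-class problem: points near (0, 0) belong to class "A",
# points near (5, 5) belong to class "B".
train = [((0.0, 0.2), "A"), ((0.3, 0.1), "A"),
         ((5.1, 4.9), "B"), ((4.8, 5.2), "B")]
print(nearest_neighbor_classify(train, (0.5, 0.5)))  # -> A
print(nearest_neighbor_classify(train, (4.0, 5.0)))  # -> B
```

Such a system "recognizes" only in a narrow engineered sense; the chapter's point is that studying what such simulations can and cannot do yields scientific insight into recognition itself.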


Keywords: Pattern Recognition · Graph Matching · Pattern Recognition Problem · Statistical Pattern Recognition · Support Vector Data Description





Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Robert P. W. Duin (1)
  • Elżbieta Pękalska (2)
  1. ICT Group, Faculty of Electrical Engineering, Mathematics and Computer Science, Delft University of Technology, The Netherlands
  2. School of Computer Science, University of Manchester, UK
