Classification

  • Geoff Dougherty
Chapter

Abstract

Classification assigns objects to classes on the basis of measured features, which are assembled into a feature vector in feature space. Selecting the most informative features, and/or combining features, is important for successful classification. Typically, a sample set (the training set) is selected and used to train the classifier, which is then applied to other objects (the test set). Supervised learning uses a labeled training set, in which the class of each object is known, and is an inductive reasoning process. Of the many approaches to classification, statistical approaches, characterized by an underlying probability model, are particularly important. We will consider a number of robust features based on shape, size, and topology, together with examples of their use in classifying various objects.
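
To make the supervised workflow described above concrete, here is a minimal Python sketch (not from the chapter itself; the feature values and class labels are invented for illustration) that trains a nearest-mean classifier, one of the simplest statistical classifiers, on a labeled training set of two-dimensional feature vectors and then applies it to a test set.

```python
import numpy as np

# Labeled training set: each row is a feature vector in feature space
# (e.g., [area, perimeter]); y_train gives the known class of each object.
# All values here are invented for the example.
X_train = np.array([[2.0, 1.5], [1.8, 1.2], [6.1, 4.0], [5.9, 3.8]])
y_train = np.array([0, 0, 1, 1])

# "Training" the nearest-mean classifier: compute the centroid of each
# class in feature space from the labeled training samples.
classes = np.unique(y_train)
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])

def classify(x):
    """Assign a feature vector to the class with the nearest centroid."""
    distances = np.linalg.norm(centroids - x, axis=1)
    return int(classes[np.argmin(distances)])

# Apply the trained classifier to unseen objects (the test set).
X_test = np.array([[2.1, 1.4], [5.5, 4.1]])
print([classify(x) for x in X_test])  # -> [0, 1]
```

The class centroids summarize the training data, and each unseen object is assigned to the class whose centroid lies nearest in feature space; more sophisticated statistical classifiers replace this distance rule with an explicit probability model.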

Keywords

Entropy · Covariance · Convolution

Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Geoff Dougherty (1)
  1. Applied Physics and Medical Imaging, California State University, Channel Islands, Camarillo, USA