Classification assigns objects to classes on the basis of measured features, which together form a feature vector in feature space. Selecting the most informative features, and/or combining features, is important for successful classification. Typically a sample set (the training set) is used to train the classifier, which is then applied to other objects (the test set). Supervised learning uses a labeled training set, in which the class of each object is known, and is an inductive reasoning process. Among the many approaches to classification, statistical approaches, characterized by an underlying probability model, are particularly important. We consider a number of robust features and examples based on shape, size, and topology to classify various objects.
Keywords: Entropy · Covariance · Convolution
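The train-then-classify workflow described above can be sketched with a minimal example. The code below is a hypothetical illustration, not a method from this chapter: a nearest-centroid classifier in which each class is summarized by the mean of its training feature vectors, and a test object is assigned to the class with the nearest centroid. The feature values and class labels are invented for the sketch.

```python
def train(samples):
    """Compute one centroid (mean feature vector) per class.

    samples: dict mapping class label -> list of feature vectors (tuples).
    """
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        dim = len(vecs[0])
        centroids[label] = tuple(sum(v[i] for v in vecs) / n for i in range(dim))
    return centroids

def classify(centroids, x):
    """Assign x to the class whose centroid is nearest (squared Euclidean distance)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda label: dist2(centroids[label], x))

# Hypothetical labeled training set: two classes separated along the
# first feature (e.g., a size measure), with a second shape-like feature.
training = {
    "small": [(1.0, 2.0), (1.2, 1.8), (0.9, 2.1)],
    "large": [(5.0, 2.0), (5.3, 1.7), (4.8, 2.2)],
}

centroids = train(training)
print(classify(centroids, (1.1, 2.0)))  # -> small
print(classify(centroids, (5.1, 1.9)))  # -> large
```

In practice the training and test sets are kept disjoint, so that performance on the test set estimates how the classifier generalizes to unseen objects.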