Forming Different-Complexity Covariance-Model Subspaces through Piecewise-Constant Spectra for Hyperspectral Image Classification

  • Are Charles Jensen
  • Marco Loog
Part of the Lecture Notes in Computer Science book series (LNCS, volume 6688)

Abstract

A key factor in classifiers based on the normal (or Gaussian) distribution is the modeling of covariance matrices. When the number of available training pixels is limited, as is often the case in hyperspectral image classification, it is necessary to limit the complexity of these covariance models. An alternative to reducing the complexity uniformly over the whole feature space is to form orthogonal subspaces and reduce the model complexity within each of them separately, e.g., forming full-complexity within-class, or interior-class, subspace models and reduced-complexity exterior-class subspace models. We propose to use subspaces created by forming fewer and wider spectral bands, instead of the more general principal component analysis (PCA) transform, in an attempt to exploit a priori knowledge of the data and create more generalizable subspaces. We investigate the resulting classifiers by studying their performance on four hyperspectral data sets. On each data set, experiments were run using different training set sizes. The results indicate that the classifiers benefit from this more data-specific approach to forming subspaces.
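
A minimal sketch of the construction, assuming equal-width bands and a single pooled exterior variance (the abstract does not fix these details, and all function and variable names below are illustrative, not the authors' code): groups of contiguous spectral channels are averaged to form a piecewise-constant, band-averaged subspace; a full covariance matrix is fitted inside that subspace, while one variance models the orthogonal complement.

import numpy as np

def band_averaging_matrix(n_channels, n_groups):
    """Orthonormal basis whose rows average contiguous groups of channels."""
    edges = np.linspace(0, n_channels, n_groups + 1, dtype=int)
    A = np.zeros((n_groups, n_channels))
    for g, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        A[g, lo:hi] = 1.0 / np.sqrt(hi - lo)  # unit-norm rows, disjoint supports
    return A  # shape (n_groups, n_channels)

def fit_class_model(X, A):
    """Full covariance inside the band-averaged subspace, pooled variance outside."""
    mu = X.mean(axis=0)
    Z = (X - mu) @ A.T                       # coordinates in the primary subspace
    cov_in = np.cov(Z, rowvar=False) + 1e-6 * np.eye(A.shape[0])  # ridge for small samples
    resid = (X - mu) - Z @ A                 # component orthogonal to the subspace
    var_out = resid.var()                    # reduced-complexity exterior model
    return mu, cov_in, var_out

def log_likelihood(X, A, mu, cov_in, var_out):
    """Gaussian log-likelihood under the two-subspace covariance model."""
    D = X - mu
    Z = D @ A.T
    resid = D - Z @ A
    k_in, k_out = A.shape[0], A.shape[1] - A.shape[0]
    inv = np.linalg.inv(cov_in)
    ll_in = -0.5 * (np.einsum('ij,jk,ik->i', Z, inv, Z)
                    + np.linalg.slogdet(cov_in)[1] + k_in * np.log(2 * np.pi))
    ll_out = -0.5 * ((resid ** 2).sum(axis=1) / var_out
                     + k_out * np.log(2 * np.pi * var_out))
    return ll_in + ll_out

Classification under this sketch would assign each pixel to the class maximizing log_likelihood plus a log-prior; how the band widths are chosen, and how much complexity the exterior model retains, are precisely the design choices the paper investigates.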

Keywords

Principal Component Analysis, Training Sample, Hyperspectral Image, Primary Space, Training Sample Size

Copyright information

© Springer-Verlag Berlin Heidelberg 2011

Authors and Affiliations

  • Are Charles Jensen (1)
  • Marco Loog (2)
  1. Department of Informatics, University of Oslo, Norway
  2. Pattern Recognition Laboratory, Delft University of Technology, The Netherlands
