Non-linear mapping for feature extraction

  • P. Scheunders
  • S. De Backer
  • A. Naud
Poster Papers
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1451)

Abstract

Mapping techniques are regularly used for the visualization of high-dimensional data sets. In this paper, mapping to d ≥ 2 dimensions is studied with the purpose of feature extraction. Two different nonlinear techniques are studied: self-organizing maps and auto-associative feedforward networks. The nonlinear techniques are compared to linear Principal Component Analysis (PCA). A comparison with respect to feature extraction is made by evaluating the reduced feature sets' ability to perform classification tasks. The experiments involve an artificial data set and grey-level and colour texture data sets.
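The evaluation protocol the abstract describes (map the data to a low-dimensional space, then judge the reduced features by how well a classifier performs on them) can be sketched as follows. This is a minimal illustration using the linear PCA baseline and a nearest-centroid classifier; the toy two-class data, the dimensionalities, and the classifier choice are assumptions for illustration, not the paper's actual experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data standing in for an artificial data set:
# two classes in 10-D whose means differ along two directions.
n, D, d = 200, 10, 2
X0 = rng.normal(0.0, 1.0, (n, D))
X1 = rng.normal(0.0, 1.0, (n, D)) + np.r_[3.0, 3.0, np.zeros(D - 2)]
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Linear PCA: project the centred data onto the d leading principal axes.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:d].T  # reduced feature set, shape (2n, d)

# Evaluate the reduced features with a simple nearest-centroid classifier.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1)
        < np.linalg.norm(Z - c0, axis=1)).astype(float)
acc = (pred == y).mean()
print(f"classification accuracy on {d}-D PCA features: {acc:.2f}")
```

A nonlinear mapping (a self-organizing map or the bottleneck layer of an auto-associative network) would simply replace the PCA projection step; the same classification-based evaluation then applies to its output space.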

Keywords

Feature Extraction · Classification Performance · Output Space · Feature Selection Technique · Feature Extraction Technique

References

  1. Devijver, P.A., Kittler, J.: Pattern Recognition: A Statistical Approach. Prentice-Hall, Englewood Cliffs, New Jersey (1982), Chapter 5
  2. Kruskal, J.B.: Nonmetric multidimensional scaling: a numerical method. Psychometrika 29 (1964) 115–129
  3. Sammon, J.W.: A nonlinear mapping for data structure analysis. IEEE Transactions on Computers C-18 (1969) 401–409
  4. Kohonen, T.: Self-Organizing Maps. Springer-Verlag (1995)
  5. Baldi, P., Hornik, K.: Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks 2 (1989) 53–58
  6. Kraaijveld, M.A., Mao, J., Jain, A.K.: A nonlinear projection method based on Kohonen's topology preserving maps. IEEE Transactions on Neural Networks 6(3) (1995) 548–559
  7. Mao, J., Jain, A.K.: Artificial neural networks for feature extraction and multivariate data projection. IEEE Transactions on Neural Networks 6(2) (1995) 296–317
  8. Kohonen, T.: The self-organizing map. Proceedings of the IEEE 78 (1990)
  9. Kramer, M.A.: Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal 37(2) (1991) 233–243
  10. Liu, D.C., Nocedal, J.: On the limited memory BFGS method for large scale optimization. Mathematical Programming 45 (1989) 503–528
  11. Pudil, P., Novovičová, J., Kittler, J.: Floating search methods in feature selection. Pattern Recognition Letters 15 (1994) 1119–1125
  12. Sarkar, N., Chaudhuri, B.B.: An efficient approach to estimate fractal dimension of textural images. Pattern Recognition 25 (1992) 1035–1041
  13. Vautrot, P., Van de Wouwer, G., Scheunders, P., Livens, S., Van Dyck, D.: Continuous wavelets for rotation-invariant texture classification and segmentation. Unpublished (1997)
  14. Van de Wouwer, G., Scheunders, P., Livens, S., Van Dyck, D.: Colour texture classification by wavelet energy-correlation signatures. Pattern Recognition (1997), to appear

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • P. Scheunders (1)
  • S. De Backer (1)
  • A. Naud (2)
  1. Vision Lab, Department of Physics, University of Antwerp, Antwerpen, Belgium
  2. Department of Computer Methods, University Nicolai Copernici, Toruń, Poland
