Seismic Principal Components Analysis Using Neural Networks

  • Kou-Yuan Huang
Part of the Modern Approaches in Geophysics book series (MAGE, volume 21)

Abstract

A neural network is described that uses the unsupervised generalized Hebbian algorithm (GHA) to find the principal eigenvectors of the covariance matrix for different types of seismograms. Principal components analysis (PCA) using the GHA network enables the extraction of information about seismic reflections and uniform neighboring traces. The seismic data analyzed are seismic traces with 20, 25, and 30 Hz Ricker wavelets. The GHA network is also applied to analyze (a) fault, reflection, and diffraction patterns after NMO correction, (b) bright spot patterns, and (c) a real seismogram from the Mississippi Canyon. The properties of high amplitude, low frequency, and polarity reversal are observed from the projections onto the principal eigenvectors. The GHA network also provides significant seismic data compression.
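The core of the approach is Sanger's generalized Hebbian learning rule, which trains a single linear layer whose weight rows converge to the leading eigenvectors of the input covariance matrix; projecting each input data vector onto those eigenvectors yields both the principal-component features and the compressed representation. The following is a minimal NumPy sketch of that rule and of compression by projection, assuming zero-mean inputs; the synthetic data, dimensions, and learning rate are illustrative placeholders, not the chapter's actual configuration.

```python
import numpy as np

def gha_step(W, x, lr):
    """One step of the generalized Hebbian algorithm (Sanger's rule).

    W  : (m, n) weights; rows converge to the top-m principal
         eigenvectors of the covariance matrix of the inputs.
    x  : (n,) zero-mean input data vector (e.g. a seismic trace window).
    lr : learning rate.
    """
    y = W @ x  # the m network outputs
    # dW = lr * (y x^T - LT(y y^T) W), where LT keeps the lower
    # triangle (including the diagonal) of the output outer product.
    W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# --- Illustrative training on synthetic zero-mean data ----------------
rng = np.random.default_rng(0)
n, m = 64, 4                      # input dimension, components kept
# Correlated synthetic stand-in for windowed seismic traces.
X = 0.1 * rng.standard_normal((5000, n)) @ rng.standard_normal((n, n))
X -= X.mean(axis=0)               # GHA assumes zero-mean inputs

W = 0.01 * rng.standard_normal((m, n))
for epoch in range(20):
    for x in X:
        W = gha_step(W, x, lr=1e-3)

# Compression: keep m projection coefficients per trace instead of n samples.
coeffs = X @ W.T                  # (5000, m) compressed representation
X_hat = coeffs @ W                # reconstruction from m components
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
print(f"relative reconstruction error with {m}/{n} components: {err:.3f}")
```

After convergence the rows of W are approximately orthonormal, so reconstructing from the m coefficients per trace (rather than storing all n samples) is what yields the data compression noted above.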

Keywords

Seismic Data · Principal Eigenvector · Seismic Trace · Ricker Wavelet · Input Data Vector



Copyright information

© Springer Science+Business Media Dordrecht 2003

Authors and Affiliations

  • Kou-Yuan Huang
    1. Department of Computer and Information Science, National Chiao Tung University, Hsinchu, Taiwan
