# Seismic Principal Components Analysis Using Neural Networks

## Abstract

A neural network is described which uses an unsupervised generalized Hebbian algorithm (GHA) to find the principal eigenvectors of the covariance matrix for different types of seismograms. Principal components analysis (PCA) using the GHA network enables the extraction of information about seismic reflections and uniform neighboring traces. The seismic data analyzed are seismic traces with 20, 25, and 30 Hz Ricker wavelets. The GHA network is also applied to analyze (a) fault, reflection, and diffraction patterns after NMO correction, (b) bright spot patterns, and (c) a real seismogram from the Mississippi Canyon. The properties of high amplitude, low frequency, and polarity reversal are observed from projections on the principal eigenvectors. The GHA network also provides significant seismic data compression.
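As a sketch of the underlying method (not the authors' implementation), the GHA weight update of Sanger's rule can be written in a few lines of NumPy. The Ricker-wavelet traces below are illustrative stand-ins for the data described above; the function names, learning rate, and epoch count are assumptions chosen for the example.

```python
import numpy as np

def ricker(f, t):
    """Ricker wavelet of peak frequency f (Hz) sampled at times t (s)."""
    a = (np.pi * f * t) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

def gha(X, n_components, lr=0.01, n_epochs=200, seed=0):
    """Estimate the leading principal eigenvectors of cov(X).

    X: (n_samples, n_dims) zero-mean data. Returns W of shape
    (n_components, n_dims) whose rows approximate the eigenvectors.
    """
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_components, X.shape[1]))
    for _ in range(n_epochs):
        for x in X:
            y = W @ x  # outputs of the linear neurons
            # Sanger's rule: Hebbian term minus a lower-triangular
            # decay that deflates earlier components from later ones.
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Zero-mean traces built from 20, 25, and 30 Hz Ricker wavelets plus noise,
# standing in for the seismic traces analyzed in the paper.
t = np.linspace(-0.1, 0.1, 64)
rng = np.random.default_rng(1)
X = np.vstack([ricker(f, t) + 0.05 * rng.normal(size=t.size)
               for f in (20.0, 25.0, 30.0) for _ in range(50)])
X -= X.mean(axis=0)

W = gha(X, n_components=2)
```

Projecting each trace onto the learned rows of `W` (i.e., `X @ W.T`) gives the low-dimensional representation that underlies the data-compression use mentioned above.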

## Keywords

Seismic Data, Principal Eigenvector, Seismic Trace, Ricker Wavelet, Input Data Vector

