From neural principal components to neural independent components

  • Part IV: Signal Processing: Blind Source Separation, Vector Quantization, and Self-Organization
  • Conference paper
Artificial Neural Networks — ICANN'97 (ICANN 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1327)

Abstract

Several neural network learning rules for linear Principal Component Analysis (PCA) have been shown to be closely related to classical PCA optimization criteria. These learning rules and the corresponding criteria are extended here to versions containing nonlinear functions. It can be shown that the extended criteria and learning rules solve the blind source separation (BSS) problem for the linear memoryless mixture model, based on the statistical independence of the source signals. This bottom-up approach to the BSS and Independent Component Analysis (ICA) problems allows the nonlinear functions to be chosen so that the learning rules not only produce independent components but also have other desirable properties, such as robustness, in contrast to the commonly used polynomial functions arising from cumulant expansions. Fast batch versions of the learning rules are also reviewed.
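
To make the idea concrete, here is a minimal sketch of a one-unit fixed-point iteration of the kind the abstract calls a "fast batch version", using the robust tanh nonlinearity rather than the cubic polynomial a fourth-order cumulant expansion would give. It assumes centered, whitened mixtures from a linear memoryless model; the function names, numerical defaults, and toy data are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np

def whiten(x):
    """Center the mixtures and whiten them so that E{z z^T} = I,
    the usual preprocessing step before ICA."""
    x = x - x.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(np.cov(x))
    return (E @ np.diag(d ** -0.5) @ E.T) @ x

def one_unit_ica(z, max_iter=200, tol=1e-6, seed=0):
    """Batch fixed-point iteration for one independent component,
    using the robust nonlinearity g(u) = tanh(u).

    z : whitened mixtures, shape (n_mixtures, n_samples)
    Returns a unit-norm weight vector w; w @ z estimates one source.
    """
    g = np.tanh
    dg = lambda u: 1.0 - np.tanh(u) ** 2
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(max_iter):
        wz = w @ z
        # Fixed-point update: w <- E{z g(w^T z)} - E{g'(w^T z)} w,
        # followed by renormalization to unit length.
        w_new = (z * g(wz)).mean(axis=1) - dg(wz).mean() * w
        w_new /= np.linalg.norm(w_new)
        if abs(abs(w_new @ w) - 1.0) < tol:  # converged up to sign
            return w_new
        w = w_new
    return w

# Toy usage: extract one source from a 2x2 linear memoryless mixture.
t = np.linspace(0, 100, 5000)
s = np.vstack([np.sign(np.sin(3 * t)), np.sin(7 * t)])  # two sources
A = np.array([[1.0, 0.6], [0.5, 1.0]])                  # mixing matrix
w = one_unit_ica(whiten(A @ s))
estimate = w @ whiten(A @ s)  # one source, up to sign and scale
```

After whitening, every unit-norm projection of the data has equal variance, so it is the nonlinearity rather than any second-order statistic that singles out an independent component instead of a principal one.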

Editor information

Wulfram Gerstner, Alain Germond, Martin Hasler, Jean-Daniel Nicoud

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Oja, E., Karhunen, J., Hyvärinen, A. (1997). From neural principal components to neural independent components. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, JD. (eds) Artificial Neural Networks — ICANN'97. ICANN 1997. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020207

  • DOI: https://doi.org/10.1007/BFb0020207

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63631-1

  • Online ISBN: 978-3-540-69620-9
