Fahlman-Type Activation Functions Applied to Nonlinear PCA Networks Provide a Generalised Independent Component Analysis

  • M. Girolami
  • C. Fyfe
Conference paper


It has been shown experimentally that Oja's nonlinear principal component analysis (PCA) algorithm is capable of performing an independent component analysis (ICA) on a specific data set [7]. However, the dynamic stability requirements of the nonlinear PCA algorithm restrict its use to data with sub-Gaussian probability densities [6]. This restriction is particularly severe, as it precludes applying the algorithm to ICA on naturally occurring data such as speech, music and certain visual images. We have shown that the nonlinear PCA algorithm can be considered as minimising an information-theoretic contrast function, and have developed a more direct link between the algorithm and ICA [6]. To remove the sub-Gaussian restriction and enable a generalised ICA spanning the full range of possible data kurtosis, we propose the use of Fahlman-type activation functions [2] in the nonlinear PCA algorithm. We show that variants of these functions satisfy all the dynamic and asymptotic stability requirements of the algorithm and successfully remove the sub-Gaussian restriction. We also report on simulations demonstrating the blind separating ability of the nonlinear PCA algorithm with Fahlman-type functions on mixtures of super-Gaussian data (natural speech).
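The approach described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes the nonlinear PCA subspace rule of Karhunen and Joutensalo [8], ΔW = η g(y)(x − Wᵀg(y))ᵀ on whitened inputs, and a Fahlman-style activation of the form g(u) = tanh(u) + a·u, where the slope parameter `a` and its value are illustrative assumptions (the paper's exact parameterisation may differ).

```python
import numpy as np

rng = np.random.default_rng(0)

def fahlman(u, a=-0.5):
    """Fahlman-type activation: a sigmoidal term plus a linear term.
    The slope `a` is an assumed parameter here; choosing its sign is what
    lets the nonlinearity be matched to sub- or super-Gaussian data."""
    return np.tanh(u) + a * u

def nonlinear_pca_step(W, x, eta=0.005, a=-0.5):
    """One stochastic update of the nonlinear PCA subspace rule
    dW = eta * g(y) (x - W^T g(y))^T on a single whitened sample x."""
    y = W @ x
    g = fahlman(y, a)
    W += eta * np.outer(g, x - W.T @ g)
    return W

# Toy demo: train on whitened mixtures of two super-Gaussian (Laplacian)
# sources, the case the sub-Gaussian restriction would normally exclude.
s = rng.laplace(size=(2, 5000))           # super-Gaussian sources
A = rng.standard_normal((2, 2))           # random mixing matrix
x = A @ s
x -= x.mean(axis=1, keepdims=True)
# PCA sphering (whitening) of the mixtures.
d, E = np.linalg.eigh(np.cov(x))
z = E @ np.diag(d ** -0.5) @ E.T @ x
W = np.linalg.qr(rng.standard_normal((2, 2)))[0]  # orthonormal start
for t in range(z.shape[1]):
    W = nonlinear_pca_step(W, z[:, t])
```

The saturating tanh term keeps the stochastic updates bounded, while the added linear term (Fahlman's device for avoiding flat spots in back-propagation [2]) shifts the effective higher-order statistics the rule responds to; this is the mechanism the paper exploits to lift the sub-Gaussian restriction.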


Independent Component Analysis · Natural Speech · Robust Principal Component Analysis · Blind Separation




  [1] P. Comon. Independent component analysis, a new concept? Signal Processing, 36:287–314, 1994.
  [2] S.E. Fahlman. Faster-learning variations on back-propagation: an empirical study. In Proc. Connectionist Models Summer School, pages 38–51. Morgan-Kaufmann, 1988.
  [3] C. Fyfe and R. Baddeley. Non-linear data structure extraction using simple Hebbian networks. Biological Cybernetics, 72(6):533–541, 1995.
  [4] M. Girolami and C. Fyfe. Higher order cumulant maximisation using nonlinear Hebbian and anti-Hebbian learning for adaptive blind separation of source signals. In Proc. IWSIP-96, IEEE/IEE International Workshop on Signal and Image Processing, pages 141–144. Elsevier, 1996.
  [5] M. Girolami and C. Fyfe. Kurtosis extrema and identification of independent components: A network approach. In Proceedings ICASSP-97, 1997. To appear.
  [6] M. Girolami and C. Fyfe. Stochastic ICA contrast maximisation using Oja's nonlinear PCA algorithm. International Journal of Neural Systems, 1997. In press.
  [7] J. Karhunen. Neural approaches to independent component analysis and source separation. In Proc. ESANN96, 1996.
  [8] J. Karhunen and J. Joutensalo. Representation and separation of signals using nonlinear PCA type learning. Neural Networks, 7(1):113–127, 1994.
  [9] E. Oja. The nonlinear PCA learning rule and signal separation — mathematical analysis. Technical Report A26, Helsinki University of Technology, 1995.

Copyright information

© Springer-Verlag Wien 1998

Authors and Affiliations

  • M. Girolami (1)
  • C. Fyfe (1)
  1. Department of Computing and Information Systems, University of Paisley, Paisley, Scotland
