Conclusions and Future Research

  • Te-Won Lee
Chapter

Abstract

Theories and applications of ICA were presented. The first part of the book focused on unsupervised learning algorithms for ICA. Based on fundamental theories in probabilistic models, information theory, and artificial neural networks, several unsupervised learning algorithms that can perform ICA were presented, including infomax, maximum likelihood estimation, negentropy maximization, nonlinear PCA, the Bussgang algorithm, and cumulant-based methods. These seemingly different theories were reviewed and placed in an information-theoretic framework that unifies several lines of research. An extension of the infomax algorithm of Bell and Sejnowski (1995) was presented that is able to blindly separate mixed signals with sub- and super-Gaussian source distributions (Girolami, 1997b; Lee et al., 1998b). The learning algorithms were furthermore extended to deal with the multichannel blind deconvolution problem; the use of filters allows the separation of voices recorded in a real environment (the cocktail party problem). Although the ICA formulation has several constraints, such as the linear model assumption, the fixed number of sensors, and the low-noise assumption, it was demonstrated that new methods can loosen some of these constraints. In particular, an overcomplete representation of the ICA formulation (Lewicki and Sejnowski, 1998c) can be used to represent more basis functions than the dimensionality of the data, and this method is therefore able to model and to infer more sources than sensors. The advantage of the inference model is that it includes a noisy ICA model and is therefore able to cope with additive noise. However, the overcomplete representation appears more sensitive to source density mismatch than the complete ICA representation. A few steps towards nonlinear ICA were presented. This problem is ill conditioned and in general does not have a unique solution. However, given appropriate constraints, there are certain solvable models, such as the two-stage model of Lee et al. (1997c), in which a nonlinear transfer function follows the linear mixing.
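The extended infomax rule mentioned above admits a compact implementation. The following NumPy fragment is a rough sketch, not code from the book; the function name, learning rate, and block size are illustrative. It applies the natural-gradient update dW ∝ (I − K tanh(u)uᵀ − uuᵀ)W, where the diagonal switching matrix K selects +1 for super-Gaussian and −1 for sub-Gaussian source estimates:

```python
import numpy as np

def extended_infomax(X, lr=0.01, n_iter=200, block=100, seed=0):
    """Sketch of an extended-infomax ICA update (after Lee, Girolami &
    Sejnowski) for mixtures of sub- and super-Gaussian sources.

    X : (n_sources, n_samples) zero-mean mixed signals.
    Returns an unmixing matrix W such that W @ X estimates the sources.
    """
    rng = np.random.default_rng(seed)
    n, T = X.shape
    W = np.eye(n)
    I = np.eye(n)
    for _ in range(n_iter):
        idx = rng.permutation(T)          # stochastic block updates
        for start in range(0, T - block + 1, block):
            u = W @ X[:, idx[start:start + block]]   # current source estimates
            tu = np.tanh(u)
            # switching criterion: +1 for super-Gaussian, -1 for sub-Gaussian
            k = np.sign(np.mean(1.0 - tu**2, axis=1) * np.mean(u**2, axis=1)
                        - np.mean(u * tu, axis=1))
            # natural-gradient update: dW = (I - K tanh(u) u^T - u u^T) W
            dW = (I - (k[:, None] * tu) @ u.T / block - u @ u.T / block) @ W
            W += lr * dW
    return W
```

With a toy mixture of, for example, a Laplacian (super-Gaussian) and a uniform (sub-Gaussian) source, W @ X should recover the sources up to the permutation and scaling ambiguity inherent to the ICA model.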

Keywords

Unsupervised Learning Algorithm, Nonlinear Transfer Function, Infomax Algorithm, Linear Model Assumption, Cocktail Party Problem

Copyright information

© Springer Science+Business Media Dordrecht 1998

Authors and Affiliations

  • Te-Won Lee
  1. Computational Neurobiology Laboratory, The Salk Institute, La Jolla, USA
