
Nonlinear Multivariate Analysis by Neural Network Models

  • Yoshio Takane
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)

Summary

Feedforward neural network (NN) models approximate the nonlinear functions connecting inputs to outputs through repeated applications of simple nonlinear transformations. Combining this feature of NN models with traditional multivariate analysis (MVA) techniques yields nonlinear versions of the latter. In this paper, we examine various properties of nonlinear MVA by NN models in two specific contexts: Cascade Correlation (CC) networks for nonlinear discriminant analysis simulating the learning of personal pronouns, and a five-layer auto-associative network for nonlinear principal component analysis (PCA) finding two defining features of cylinders. We analyze the mechanism of function approximation, focusing in particular on how interaction effects among input variables are captured by superpositions of sigmoidal transformations.
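The capture of interaction effects by superposed sigmoids can be illustrated with the canonical example of an interaction no linear model can represent: the XOR pattern. The sketch below (not the paper's code; network size, learning rate, and seed are illustrative assumptions) trains a one-hidden-layer feedforward net of sigmoid units by plain gradient descent and checks that the squared error falls.

```python
import numpy as np

# Minimal sketch: a feedforward net with sigmoid hidden units learns
# XOR, the canonical interaction effect between two input variables.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Training patterns: inputs and XOR targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Weights: 2 inputs -> 4 hidden sigmoid units -> 1 sigmoid output unit
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 1.0
losses = []
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)       # hidden-layer activations
    out = sigmoid(h @ W2 + b2)     # output-unit activation
    err = out - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the squared-error gradient
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(losses[0], losses[-1])  # error before vs. after training
```

The output unit's response is a weighted superposition of the hidden units' sigmoidal transformations of the inputs; it is this superposition, rather than any single unit, that realizes the interaction.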

Keywords

Hidden Layer · Input Pattern · Hidden Unit · Output Unit · Training Pattern
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer Japan 1998

Authors and Affiliations

  • Yoshio Takane
    1. Department of Psychology, McGill University, Quebec, Canada
