
From Classical Connectionist Models to Probabilistic/Generalised Regression Neural Networks (PNNs/GRNNs)

Chapter in: Artificial Mind System - Kernel Memory Approach

Part of the book series: Studies in Computational Intelligence (SCI, volume 1)

Abstract

This chapter begins by briefly summarising some well-known classical connectionist/artificial neural network models: multi-layered perceptron neural networks (MLP-NNs), radial basis function neural networks (RBF-NNs), self-organising feature maps (SOFMs), associative memory, and Hopfield-type recurrent neural networks (HRNNs). These models are shown to typically require iterative and/or complex parameter-estimation procedures, and the chapter highlights why interest in such approaches has generally waned for modelling psychological functions and for developing artificial intelligence (in a more realistic sense).
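To make the contrast concrete, the generalised regression neural network (GRNN) named in the chapter title is, in its standard formulation, a Nadaraya-Watson kernel-regression estimator: it stores the training samples and produces predictions in a single pass, with no iterative weight optimisation of the kind MLP backpropagation requires. The following is a minimal illustrative sketch (not the book's own code); the function name, the toy data, and the choice of smoothing parameter `sigma` are assumptions for demonstration only.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Minimal GRNN sketch: a Nadaraya-Watson kernel-regression
    estimate with a Gaussian kernel. 'Training' is a single pass
    (the samples themselves are stored); only the smoothing
    parameter sigma needs to be chosen."""
    preds = []
    for x in np.atleast_2d(X_query):
        d2 = np.sum((X_train - x) ** 2, axis=1)      # squared distances to stored samples
        w = np.exp(-d2 / (2.0 * sigma ** 2))         # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w)) # kernel-weighted average of targets
    return np.array(preds)

# One-dimensional demo: noiseless samples of y = x^2 on [-1, 1]
X = np.linspace(-1.0, 1.0, 21).reshape(-1, 1)
y = (X ** 2).ravel()
print(grnn_predict(X, y, np.array([[0.0], [0.9]]), sigma=0.1))
```

Because all parameters are fixed directly from the data, this estimator avoids the iterative parameter-approximation procedures the abstract attributes to the classical connectionist models.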



About this chapter

Cite this chapter

Hoya, T. From Classical Connectionist Models to Probabilistic/Generalised Regression Neural Networks (PNNs/GRNNs). In: Artificial Mind System - Kernel Memory Approach. Studies in Computational Intelligence, vol 1. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10997444_2


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26072-1

  • Online ISBN: 978-3-540-32403-4

  • eBook Packages: Engineering (R0)
