Learning in the AMS Context

Artificial Mind System - Kernel Memory Approach

Part of the book series: Studies in Computational Intelligence (SCI, volume 1)


Abstract

In this chapter, we dig further into the notion of “learning” within the AMS context. In conventional connectionist models, the term “learning” almost always refers merely to establishing input-output relations through parametric changes within the model; the parameter tuning is typically performed by some iterative algorithm, given a finite (and mostly static) set of variables, i.e. the training patterns and their target signals. This interpretation, however, is rather microscopic and hence still quite distant from the general notion of learning: it ends with the parameter tuning itself and offers no clear notion of learning at a macroscopic level, e.g. one that could explain the higher-order functions and phenomena occurring within the brain (see e.g. Roy, 2000).
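The “microscopic” sense of learning criticised above — iterative parameter tuning over a finite, static set of training patterns and target signals — can be sketched with a minimal gradient-descent example. This is an illustrative sketch only: the data, network size, and learning rate are assumptions, not part of the AMS or kernel memory approach.

```python
import numpy as np

# Static training patterns (inputs) and target signals -- the finite,
# fixed set of variables of the conventional connectionist setting.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
t = np.array([0.0, 1.0, 1.0, 1.0])  # targets for a simple OR-like mapping

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=2)   # connection weights (tunable parameters)
b = 0.0                             # bias term
eta = 0.5                           # learning rate

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

# Iterative parameter tuning: repeated gradient-descent sweeps over the
# fixed training set -- "learning" only in the microscopic sense.
for epoch in range(2000):
    y = sigmoid(X @ w + b)           # current input-output relation
    err = y - t                      # gradient of cross-entropy w.r.t. activation
    w -= eta * (X.T @ err) / len(t)  # adjust weights down the error gradient
    b -= eta * err.mean()

print(np.round(sigmoid(X @ w + b)))  # → [0. 1. 1. 1.]
```

The loop does nothing beyond fitting the given input-output pairs; once the data are exhausted, no macroscopic-level account of what has been “learned” emerges, which is precisely the limitation the chapter addresses.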



Cite this chapter

Hoya, T. Learning in the AMS Context. In: Artificial Mind System - Kernel Memory Approach. Studies in Computational Intelligence, vol 1. Springer, Berlin, Heidelberg. https://doi.org/10.1007/10997444_7


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26072-1

  • Online ISBN: 978-3-540-32403-4

  • eBook Packages: Engineering (R0)
