A New Approach to Learning Via Self-Organization
Recently, we introduced a simple "toy" brain model to address the problem of learning in the absence of external intelligence [1]. Our model departs from traditional gradient-descent-based approaches to learning by operating at a highly susceptible "critical" state with low activity and sparse connections between firing neurons. Here, quantitative studies of the model's performance on a simple association task show that tuning the system close to this critical state yields dramatic gains in performance.
Keywords: Synaptic Weight · Association Task · Reinforcement Learning Model · Input Site · Output Site
References
- J. Hertz, A. Krogh, and R. G. Palmer, Introduction to the Theory of Neural Computation (Addison-Wesley, Redwood City, 1991).
- Based on the partial information provided by the critic, a target pattern is determined and the output-weight errors are computed. The rest of the weights can then be updated by back-propagating this error signal through the network (see Hertz et al. in Ref. 2).
- D. Stassinopoulos and P. Bak, in Proceedings of the Fourth Appalachian Conference on Behavioral Neurodynamics, Radford, edited by K. Pribram (Lawrence Erlbaum, New Jersey, 1996).
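As a rough illustration of the back-propagation comparison scheme described in the note above, the sketch below trains a small two-layer network on a one-to-one association task. The critic reports only partial information, namely which output sites are wrong; a target pattern is built by flipping those sites, the output-weight errors are computed, and the remaining weights are updated by back-propagating that error signal. The network sizes, learning rate, and bit-flip target rule are illustrative assumptions, not the authors' implementation.

```python
import math
import random

random.seed(0)

n_in, n_hid, n_out = 4, 6, 4
# Small random initial weights (sizes and scale are assumptions).
W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hid)]
W2 = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_out)]
eta = 0.5  # learning rate (assumed)

def forward(x):
    """Tanh hidden layer, sigmoid output layer."""
    h = [math.tanh(sum(W1[j][i] * x[i] for i in range(n_in)))
         for j in range(n_hid)]
    y = [1.0 / (1.0 + math.exp(-sum(W2[k][j] * h[j] for j in range(n_hid))))
         for k in range(n_out)]
    return h, y

# Association task: input pattern i should activate output site i.
patterns = [[1.0 if i == j else 0.0 for j in range(n_in)] for i in range(n_in)]

for _ in range(2000):
    for i, x in enumerate(patterns):
        h, y = forward(x)
        out = [1.0 if v > 0.5 else 0.0 for v in y]
        desired = [1.0 if k == i else 0.0 for k in range(n_out)]
        # Critic: partial information only -- which output sites are wrong.
        wrong = [out[k] != desired[k] for k in range(n_out)]
        if not any(wrong):
            continue
        # Determine a target pattern by flipping the wrong sites, then
        # compute the output-weight error signal.
        target = [1.0 - out[k] if wrong[k] else out[k] for k in range(n_out)]
        d_out = [(target[k] - y[k]) * y[k] * (1.0 - y[k]) for k in range(n_out)]
        # Back-propagate the error signal to the hidden layer.
        d_hid = [sum(W2[k][j] * d_out[k] for k in range(n_out)) * (1.0 - h[j] ** 2)
                 for j in range(n_hid)]
        for k in range(n_out):
            for j in range(n_hid):
                W2[k][j] += eta * d_out[k] * h[j]
        for j in range(n_hid):
            for i2 in range(n_in):
                W1[j][i2] += eta * d_hid[j] * x[i2]

# Count residual wrong output sites after training.
wrong_bits = sum(
    1
    for i, x in enumerate(patterns)
    for k, v in enumerate(forward(x)[1])
    if (1.0 if v > 0.5 else 0.0) != (1.0 if k == i else 0.0)
)
print("wrong output sites after training:", wrong_bits)
```

Note that flipping every wrong site reconstructs the full desired pattern here; the point of the scheme is that the weight update is driven only by the critic's right/wrong feedback rather than by a directly supplied target.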