Associative Reinforcement Training Using Probabilistic RAM Nets

  • Denise Gorse
Conference paper
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)


It is described how probabilistic RAMs (pRAMs) may be applied to problems of associative search, using local reinforcement rules which utilise synaptic rather than threshold noise in the stochastic search procedure. Examples are given of syntactic and spatial learning tasks which are successfully solved using these techniques.
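The idea the abstract summarises can be sketched in code: a pRAM stores a firing probability at each addressable memory location, so the stochasticity ("synaptic noise") lives in the memory contents rather than in an output threshold, and a local reward-penalty rule adjusts only the location just accessed. The class and parameter names below are illustrative assumptions, and the update rule is a simplified stand-in for the reinforcement rules cited in the paper, not a reproduction of them.

```python
import random

class PRAM:
    """Minimal sketch of a probabilistic RAM (pRAM) unit.

    A binary input vector addresses one of 2^n memory locations, each
    holding a firing probability. This is a hypothetical illustration,
    not the paper's exact formulation.
    """

    def __init__(self, n_inputs, rho=0.1, lam=0.05):
        self.memory = [0.5] * (2 ** n_inputs)  # stored firing probabilities
        self.rho = rho  # reward learning rate
        self.lam = lam  # penalty learning rate

    def address(self, inputs):
        # The binary input pattern selects a single memory location.
        addr = 0
        for bit in inputs:
            addr = (addr << 1) | bit
        return addr

    def fire(self, inputs):
        # Output 1 with the stored probability: the noise used for
        # stochastic search is "synaptic" (in the memory), not a
        # noisy threshold on a summed activation.
        addr = self.address(inputs)
        self.last_addr = addr
        self.last_output = 1 if random.random() < self.memory[addr] else 0
        return self.last_output

    def reinforce(self, reward):
        # Local reward-penalty sketch: reward pulls the accessed
        # probability toward the action just emitted; penalty pushes
        # it toward the opposite action. Only the addressed location
        # is updated, so the rule is local.
        a = self.last_output
        p = self.memory[self.last_addr]
        if reward:
            p += self.rho * (a - p)
        else:
            p += self.lam * ((1 - a) - p)
        self.memory[self.last_addr] = p
```

Because each address holds an independent probability, a single unit of this kind can learn any Boolean function of its inputs (e.g. XOR) from a scalar success/failure signal alone, which is the associative-search setting the paper addresses.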


Keywords: Reinforcement Training, Binary Output, Eligibility Trace, Reinforcement Rule, Regular Grammar
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.




Copyright information

© Springer-Verlag London Limited 1992

Authors and Affiliations

  • Denise Gorse
  1. Department of Computer Science, University College London, UK
