Neural Architectures and Algorithms

  • Alan Murray
Chapter

Abstract

Artificial Neural Networks (ANNs) are intelligent, thinking machines. They work in the same way as the human brain. They learn from experience in a way that no conventional computer can and they will shortly solve all of the world’s hard computational problems.
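
As a concrete (if modest) illustration of what "learning from experience" means in practice, below is a minimal sketch of Rosenblatt-style perceptron training on a toy linearly separable problem. The data set, learning rate, and epoch count are illustrative assumptions, not taken from the chapter.

```python
# Minimal sketch (not from the chapter): a Rosenblatt-style perceptron
# trained on a toy linearly separable problem. Data, learning rate and
# epoch count are illustrative assumptions.
import random

def train_perceptron(samples, lr=0.1, epochs=20):
    """samples: list of (input_vector, target) pairs, target in {-1, +1}."""
    n = len(samples[0][0])
    w = [0.0] * n          # synaptic weights
    b = 0.0                # bias (threshold)
    for _ in range(epochs):
        random.shuffle(samples)
        for x, t in samples:
            # Hard-threshold output: +1 if the weighted sum exceeds 0.
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if y != t:     # update only on error: "learning from experience"
                w = [wi + lr * t * xi for wi, xi in zip(w, x)]
                b += lr * t
    return w, b

# Toy example: learn logical AND (linearly separable, so training converges).
data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
w, b = train_perceptron(data)
print(w, b)
```

A single perceptron of this kind can only realise linearly separable decision boundaries; networks with hidden units, trained for example by error back-propagation, remove that restriction.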

Keywords

Radial Basis Function, Radial Basis Function Network, Synaptic Weight, Decision Boundary, Hidden Unit
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer Science+Business Media New York 1995

Authors and Affiliations

  • Alan Murray
  1. Department of Electrical Engineering, University of Edinburgh, Edinburgh, UK
