Dataflow Learning in Coupled Lattices: An Application to Artificial Neural Networks

  • Jose C. Principe
  • Curt Lefebvre
  • Craig L. Fancourt
Part of the Nonconvex Optimization and Its Applications book series (NOIA, volume 62)


This chapter describes artificial neural networks (ANNs) as coupled lattices of dynamic nonlinear processing elements and studies ways to adapt their parameters. This view extends the conventional paradigm of static neural networks and sheds light on the principles of parameter optimization for both the static and dynamic cases. We show how gradient descent learning can be naturally implemented with local rules in coupled lattices. We review the present state of the art in neural network training and present recent results that take advantage of the distributed nature of coupled lattices for optimization.
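The key idea in the abstract, that gradient descent can be carried out with purely local rules flowing over the same lattice topology, can be illustrated with a minimal sketch (not the chapter's code; the network size, data, and learning rate are arbitrary assumptions). Activations flow forward through the lattice of nonlinear processing elements, errors flow backward through the same connections, and each weight update uses only quantities available at its own element: the local input activation and the locally back-propagated error.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_prime(y):
    # derivative of tanh expressed through the PE's own output
    return 1.0 - y**2

# Tiny two-layer static lattice: 2 inputs -> 3 hidden PEs -> 1 output PE
W1 = rng.normal(scale=0.5, size=(3, 2))
W2 = rng.normal(scale=0.5, size=(1, 3))
lr = 0.1

# Illustrative XOR-like data with bipolar targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[-1], [1], [1], [-1]], dtype=float)

def mse():
    return float(np.mean((T - np.tanh(W2 @ np.tanh(W1 @ X.T)).T) ** 2))

mse_before = mse()

for epoch in range(2000):
    for x, t in zip(X, T):
        # forward dataflow: activations move through the lattice
        h = np.tanh(W1 @ x)
        y = np.tanh(W2 @ h)
        # backward dataflow: errors move through the same topology
        d2 = (t - y) * tanh_prime(y)      # local error at the output PE
        d1 = (W2.T @ d2) * tanh_prime(h)  # local errors at the hidden PEs
        # local updates: each weight sees only its PE's error and input
        W2 += lr * np.outer(d2, h)
        W1 += lr * np.outer(d1, x)

mse_after = mse()
```

Note that no update refers to a global quantity: the "learning" dataflow mirrors the "activation" dataflow, which is what makes the coupled-lattice view a natural substrate for distributed gradient descent.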







Copyright information

© Springer Science+Business Media Dordrecht 2002

Authors and Affiliations

  • Jose C. Principe (1)
  • Curt Lefebvre (2)
  • Craig L. Fancourt (3)

  1. Computational NeuroEngineering Laboratory, Electrical and Computer Engineering Department, University of Florida, Gainesville, USA
  2. Neuco, Inc., John Hancock Tower, Boston, USA
  3. Adaptive Image & Signal Processing Group, Sarnoff Corporation, USA
