A Unified Modeling of Neural Networks Architectures

  • Conference paper
Sensor-Based Robots: Algorithms and Architectures

Part of the book series: NATO ASI Series (NATO ASI F, volume 66)

Abstract

Although neural networks can ultimately be used for many applications, their suitability for a specific application depends on data acquisition and representation, performance versus the amount of training data, response time, classification accuracy, fault tolerance, generality, adaptability, computational efficiency, and size and power requirements. To address such a broad spectrum of considerations, a unified examination of the theoretical foundations of neural network modeling is needed, which can in turn lead to more effective simulation and implementation tools. For this purpose, the paper proposes a unified modeling formulation for a wide variety of artificial neural networks (ANNs): single-layer feedback networks, competitive learning networks, and multilayer feed-forward networks, as well as some probabilistic models. Existing connectionist neural networks are parameterized by a nonlinear activation function, a weight measure function, a weight-updating formula, back-propagation, an iteration index (for the retrieving phase), and a recursion index (for the learning phase). Based on this formulation, new models may be derived; one such example is discussed in the paper. The formulation also leads to a basic structure for a universal simulation tool and neurocomputer architecture.

This research was supported in part by the National Science Foundation under Grant MIP-87-14689, and by the Innovative Science and Technology Office of the Strategic Defense Initiative Organization, administered through the Office of Naval Research under Contracts N00014-85-K-0469 and N00014-85-K-0599.
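To make the abstract's parameterization concrete, here is a minimal sketch of what such a unified formulation might look like in code: a network is specified by an activation function, a weight measure (how a unit combines its weights with the input), a weight-update rule, an iteration count for the retrieving phase, and a recursion count for the learning phase. This is an illustrative reading of the abstract, not the paper's actual notation; all names (UnifiedNet, retrieve, learn) and both example instantiations are assumptions.

```python
# Hypothetical sketch of a unified ANN parameterization (not the authors' notation).
import numpy as np

class UnifiedNet:
    def __init__(self, weights, activation, measure, update):
        self.W = weights        # weight matrix
        self.f = activation     # nonlinear activation f(.)
        self.measure = measure  # weight measure: (W, x) -> net input
        self.update = update    # weight-update rule: (W, x, y) -> new W

    def retrieve(self, x, iterations=1):
        # Retrieving phase: a(k+1) = f(measure(W, a(k))),
        # k playing the role of the iteration index.
        a = x
        for _ in range(iterations):
            a = self.f(self.measure(self.W, a))
        return a

    def learn(self, samples, recursions=1):
        # Learning phase: the outer sweep over the training set
        # plays the role of the recursion index.
        for _ in range(recursions):
            for x in samples:
                y = self.retrieve(x)
                self.W = self.update(self.W, x, y)

# Instance 1: a Hopfield-style single-layer feedback net
# (inner-product measure, sign activation, Hebbian update).
hopfield = UnifiedNet(
    weights=np.zeros((4, 4)),
    activation=np.sign,
    measure=lambda W, x: W @ x,
    update=lambda W, x, y: W + np.outer(x, x),
)

# Instance 2: a competitive-learning net (distance measure,
# winner-take-all activation, move-winner-toward-input update).
# Meaningful only with a single retrieving pass (iterations=1).
def winner_take_all(u):
    y = np.zeros_like(u)
    y[np.argmin(u)] = 1.0  # winner = unit with the closest weight vector
    return y

def move_winner(W, x, y, lr=0.1):
    w = np.argmax(y)       # index of the winning unit
    W = W.copy()
    W[w] += lr * (x - W[w])
    return W

competitive = UnifiedNet(
    weights=np.random.rand(3, 4),
    activation=winner_take_all,
    measure=lambda W, x: np.linalg.norm(W - x, axis=1),
    update=move_winner,
)

# Usage: store one bipolar pattern, then recall it with two feedback iterations.
pattern = np.array([1.0, -1.0, 1.0, -1.0])
hopfield.learn([pattern])
print(hopfield.retrieve(pattern, iterations=2))  # recalls the stored pattern
```

Under this reading, the feedback net and the competitive net differ only in the four parameters passed to the constructor, which is the kind of uniformity a universal simulation tool or neurocomputer architecture could exploit.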


Copyright information

© 1991 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kung, S.Y., Hwang, J.N. (1991). A Unified Modeling of Neural Networks Architectures. In: Lee, C.S.G. (eds) Sensor-Based Robots: Algorithms and Architectures. NATO ASI Series, vol 66. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-75530-9_8

  • DOI: https://doi.org/10.1007/978-3-642-75530-9_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-75532-3

  • Online ISBN: 978-3-642-75530-9
