Neural Approximation: A Control Perspective

  • Rafał Żbikowski
  • Andrzej Dzieliński
Part of the Advances in Industrial Control book series (AIC)

Abstract

This chapter discusses the theoretical foundations of modelling nonlinear control systems with neural networks. Both feedforward and recurrent networks are described, with emphasis on the practical implications of the mathematical results. The major approaches based on approximation and interpolation theory are presented: the Stone-Weierstrass theorem, Kolmogorov's theorem and multidimensional sampling. These are compared within a unified framework, and their relevance for neural modelling of nonlinear control systems is stressed. Approximation of functionals with feedforward networks is also briefly explained. Finally, approximation of dynamical systems with recurrent networks is described, with emphasis on the concept of differential approximation.
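
As a concrete point of reference, the two displayed forms below illustrate, in generic notation that is not necessarily the chapter's, the kind of approximants these results concern; the symbols $N$, $\alpha_i$, $w_i$, $b_i$, $\tau$, $W$, $\theta$ denote free parameters and $\sigma$ a fixed sigmoidal activation. For feedforward networks, density results of the Cybenko and Hornik type state that for any continuous $f$ on a compact set $K \subset \mathbb{R}^n$ and any $\varepsilon > 0$ there is a single-hidden-layer network satisfying
\[
  \sup_{x \in K}\,\Bigl|\, f(x) - \sum_{i=1}^{N} \alpha_i\,\sigma\bigl(w_i^{\top}x + b_i\bigr)\Bigr| < \varepsilon .
\]
For recurrent networks, trajectory-approximation results such as Funahashi and Nakamura's use continuous-time dynamics of the general form
\[
  \dot{z} = -\frac{1}{\tau}\,z + W\,\sigma(z) + \theta ,
\]
whose state trajectories, read off a subset of the units, can approximate those of a dynamical system $\dot{x} = f(x)$ on a compact set over a finite time interval.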

Keywords

Neural Network, Recurrent Neural Network, Neural Modelling, Feedforward Network, Recurrent Network

Copyright information

© Springer-Verlag London Limited 1995

Authors and Affiliations

  • Rafał Żbikowski (1)
  • Andrzej Dzieliński (1)
  1. Control Group, Department of Mechanical Engineering, James Watt Building, Glasgow University, Glasgow, Scotland, UK
