
Incremental Approximation by Neural Networks

Chapter in Dealing with Complexity

Abstract

An important task in practical applications of neural networks is the design of the network architecture. Network parameters are usually determined for a fixed architecture, which requires solving a non-linear optimization problem in a multidimensional parameter space. An alternative approach is to use a dynamically allocated architecture and to determine the final set of network parameters in a series of steps, each taking place in a lower-dimensional space. Various types of such architecture dynamics have been considered, in which network units or connections are either added or deleted. The simplest type is the incremental architecture, where in each step the architecture is extended by adding one new unit.
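The incremental scheme described in the abstract can be illustrated with a short sketch in which one hidden unit is added per step and only that unit's parameters are fitted to the current residual, so each step is a three-dimensional optimization rather than a full multidimensional one. This is a minimal illustration under assumed choices: Gaussian radial-basis units, a squared-error criterion, and SciPy's Nelder-Mead optimizer. The function names (incremental_rbf_fit, gaussian_unit, predict) are hypothetical and not taken from the chapter.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_unit(x, center, width):
    # Response of a single Gaussian radial-basis hidden unit.
    return np.exp(-((x - center) / width) ** 2)

def incremental_rbf_fit(x, y, n_units=8):
    # Add one hidden unit per step; each step optimizes only the new unit's
    # three parameters (output weight, center, width) against the residual,
    # keeping all previously added units fixed.
    residual = y.astype(float).copy()
    units = []  # list of (weight, center, width) triples

    for _ in range(n_units):
        def step_error(p, r=residual):
            w, c, s = p
            return np.mean((r - w * gaussian_unit(x, c, abs(s) + 1e-6)) ** 2)

        # Start the new unit where the residual is currently largest.
        c0 = x[np.argmax(np.abs(residual))]
        result = minimize(step_error, x0=[residual.max(), c0, 0.5],
                          method="Nelder-Mead")
        w, c, s = result.x
        width = abs(s) + 1e-6
        units.append((w, c, width))
        residual = residual - w * gaussian_unit(x, c, width)

    return units

def predict(units, x):
    # Evaluate the incrementally built one-hidden-layer network.
    return sum(w * gaussian_unit(x, c, s) for w, c, s in units)

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 200)
    y = np.sin(3 * np.pi * x)
    net = incremental_rbf_fit(x, y, n_units=8)
    approx = predict(net, x)
    print("RMS error:", np.sqrt(np.mean((y - approx) ** 2)))
```

Keeping the previously added units fixed is the simplest variant of such architecture dynamics; other incremental schemes also re-fit the linear output weights of all units after each addition.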

Copyright information

© 1998 Springer-Verlag London Limited

About this chapter

Cite this chapter

Kárný, M., Warwick, K., Kůrková, V. (1998). Incremental Approximation by Neural Networks. In: Kárný, M., Warwick, K., Kůrková, V. (eds) Dealing with Complexity. Perspectives in Neural Computing. Springer, London. https://doi.org/10.1007/978-1-4471-1523-6_12


  • DOI: https://doi.org/10.1007/978-1-4471-1523-6_12

  • Publisher Name: Springer, London

  • Print ISBN: 978-3-540-76160-0

  • Online ISBN: 978-1-4471-1523-6

  • eBook Packages: Springer Book Archive
