Optimierung der Konvergenzgeschwindigkeit von Backpropagation

  • R. Linder
  • S. J. Pöppl
Part of the Informatik aktuell book series (INFORMAT)

Abstract

A new method for neuron-specific adaptation of the learning rate in backpropagation networks is presented, which yields a marked acceleration of convergence, in particular for large networks with many layers. This is demonstrated on the two-spirals benchmark as well as on two real-world problems from medical technology and physics.
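The abstract does not spell out the update rule. As a hedged illustration of what neuron-specific learning-rate adaptation can look like, the sketch below trains a small multilayer perceptron on the two-spirals data with one learning rate per neuron, grown while the neuron's incoming-gradient vector keeps its direction and shrunk when it reverses (a sign-agreement rule in the spirit of the well-known delta-bar-delta scheme). The network size, all constants, and the adaptation rule itself are illustrative assumptions, not the authors' method, and the tiny network is meant to expose the mechanism rather than to solve the benchmark.

    # Illustrative sketch only: per-neuron learning rates for a small MLP on
    # the two-spirals benchmark. The sign-agreement rule (grow a neuron's rate
    # while its gradient keeps its direction, shrink it on a flip) is an
    # assumed stand-in, not the method proposed in the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    def two_spirals(n=97):
        """Classic two-spirals data set: n points per spiral."""
        i = np.arange(n)
        phi = i / 16.0 * np.pi
        r = 6.5 * (104 - i) / 104
        a = np.c_[r * np.cos(phi), r * np.sin(phi)]
        return np.vstack([a, -a]), np.hstack([np.ones(n), np.zeros(n)])

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-np.clip(z, -60, 60)))

    X, y = two_spirals()
    X = X / np.abs(X).max()            # scale inputs into [-1, 1]
    H = 20                             # hidden units (illustrative size)

    W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

    eta1 = np.full(H, 0.1)             # one learning rate per hidden neuron
    eta2 = np.full(1, 0.1)             # one learning rate per output neuron
    g1_prev = np.zeros((3, H))         # last gradients, bias row included
    g2_prev = np.zeros((H + 1, 1))

    for epoch in range(2000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        o = sigmoid(h @ W2 + b2).ravel()

        # backward pass for the squared error E = 0.5 * sum((o - y)^2)
        d_o = (o - y) * o * (1 - o)
        g2 = np.vstack([h.T @ d_o[:, None], [[d_o.sum()]]])
        d_h = (d_o[:, None] @ W2.T) * h * (1 - h)
        g1 = np.vstack([X.T @ d_h, d_h.sum(axis=0)])

        # neuron-specific adaptation: compare each neuron's incoming-gradient
        # vector with the previous step via a dot product
        agree1 = (g1 * g1_prev).sum(axis=0)
        agree2 = (g2 * g2_prev).sum(axis=0)
        eta1 = np.where(agree1 > 0, np.minimum(eta1 * 1.1, 1.0),
                        np.maximum(eta1 * 0.5, 1e-4))
        eta2 = np.where(agree2 > 0, np.minimum(eta2 * 1.1, 1.0),
                        np.maximum(eta2 * 0.5, 1e-4))
        g1_prev, g2_prev = g1, g2

        # apply each neuron's rate to all of its incoming weights and bias
        W1 -= eta1 * g1[:2]; b1 -= eta1 * g1[2]
        W2 -= eta2 * g2[:H]; b2 -= eta2 * g2[H]

        if epoch % 500 == 0:
            print(epoch, 0.5 * np.sum((o - y) ** 2))

Applying one rate to all incoming weights of a neuron, rather than one rate per weight (as in RPROP-like schemes) or one global rate, is what makes such an adaptation neuron-specific; it keeps the bookkeeping per neuron rather than per connection.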

Copyright information

© Springer-Verlag Berlin Heidelberg 1998

Authors and Affiliations

  • R. Linder (1)
  • S. J. Pöppl (1)

  1. Institut für Medizinische Informatik, Medizinische Universität zu Lübeck, Germany