
References

  1. E.H.L. Aarts, P.J.M. van Laarhoven. Statistical cooling: a general approach to combinatorial optimization problems. Philips J. of Research, 40: 193–226, 1985.
  2. E.H.L. Aarts, J.H.M. Korst. Simulated annealing and Boltzmann machines. Wiley, Chichester, 1989.
  3. D.H. Ackley, G.E. Hinton, T.J. Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science 9: 147–169, 1985. Also in: Anderson, Rosenfeld (eds.), Neurocomputing: Foundations of Research, MIT Press, 1988.
  4. J.T. Alander. An indexed bibliography of genetic algorithms and neural networks. Report Series No. 94-1-NN, Department of Information Technology and Production Economics, University of Vaasa. Available via FTP: ftp.uwasa.fi, directory: cs/report94-l, file: gaNNbib.ps.Z, 1996.
  5. M. Albrecht. Ein Vergleich neuronaler Modelle zur Lösung komplexer Zuordnungsprobleme am Beispiel der Schulstundenplanung. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1993.
  6. E. Allender. A note on the power of threshold circuits. In 30th Annual Symposium on Foundations of Computer Science, pages 580–584. IEEE Computer Society Press, 1989.
  7. J.A. Anderson. Neural models with cognitive implications. In: LaBerge, Samuelson (eds.), Basic Processes in Reading: Perception and Comprehension, Erlbaum, Hillsdale, NJ, 1977.
  8. P. Arena, R. Caponetto, L. Fortuna, M.G. Xibilia. Genetic algorithms to select optimal neural network topology. Proceedings of the 35th Midwest Conference on Circuits and Systems, 1381–1383, 1992.
  9. W.R. Ashby. Design for a brain. Wiley, New York, 1960.
  10. A.G. Barto, S.J. Bradtke, S.P. Singh. Learning to act using real-time dynamic programming. Artificial Intelligence 72: 81–138, 1995.
  11. R.K. Belew, J. McInerney, N.N. Schraudolph. Evolving networks: using genetic algorithms with connectionist learning. Technical Report CS90-174, Computer Science and Engineering Department, UCSD (La Jolla), 1990.
  12. D.P. Bertsekas. Dynamic Programming: Deterministic and Stochastic Models. Prentice-Hall, Englewood Cliffs, NJ, 1989.
  13. A. Blum, R.L. Rivest. Training a 3-node neural network is NP-complete. Neural Information Processing Systems 1, 494–501. Morgan Kaufmann, 1989.
  14. A. Blum, R.L. Rivest. Training a 3-node neural network is NP-complete. Neural Networks, 5(1): 117–227, 1992.
  15. J. Branke. Evolutionary algorithms for neural network design and training. Proceedings of the First Nordic Workshop on Genetic Algorithms and its Applications, Vaasa, Finland, 1995. Also available as Technical Report No. 322, Institute AIFB, Universität Karlsruhe, 1995.
  16. H. Braun. Massiv parallele Algorithmen zur Optimierung kombinatorischer Optimierungsprobleme. Dissertation, Universität Karlsruhe, 1990.
  17. H. Braun. Theorie neuronaler Netze. Lecture notes manuscript, Universität Karlsruhe, 1991a.
  18. H. Braun. On solving traveling salesman problems by genetic algorithms. Proceedings of the Int. Conf. Parallel Problem Solving from Nature PPSN 91, Springer Lecture Notes in Computer Science 496: 128–132, 1991b.
  19. H. Braun. Evolution - a Paradigm for Constructing Intelligent Agents. Proceedings of the ZiFFG Conference: Prerational Intelligence - Phenomenology of Complexity Emerging in Systems of Simple Interacting Agents, 1994.
  20. H. Braun. On optimizing large neural networks (multilayer perceptrons) by learning and evolution. International Congress on Industrial and Applied Mathematics ICIAM 95; also in Zeitschrift für angewandte Mathematik und Mechanik ZAMM, 1996.
  21. H. Braun. On solving traveling salesman problems by genetic algorithms. Proceedings of the Int. Conf. Parallel Problem Solving from Nature PPSN 91, Springer Lecture Notes in Computer Science 496, pp. 128–132, 1991.
  22. H. Braun, J. Feulner, V. Ullrich. Learning strategies for solving the problem of planning using backpropagation. Proceedings of NEURO-Nimes 91, 4th Int. Conf. on Neural Networks and their Applications, 1991.
  23. H. Braun, T. Müller. Enhancing Marr's cooperative algorithm. Proceedings of the Int. Neural Network Conference, pp. 38–41, 1990.
  24. H. Braun, K.H. Preut, M. Höhfeld. Optimierung von Neuro-Fuzzy-Netzwerken mit evolutionären Strategien. Proceedings of 3. Workshop Fuzzy-Neuro-Systeme '95, GI-Tagung, Darmstadt, 1995.
  25. H. Braun, T. Ragg. ENZO, Evolution of Neural Networks. User Manual and Implementation Guide, Version 1.0. Available via FTP: illftp.ira.uka.de, directory: /pub/neuro/ENZO, 1995.
  26. H. Braun, J. Weisbrod. Evolving neural networks for application oriented problems. Proceedings of the Second Annual Conference on Evolutionary Programming, pp. 62–71, 1993.
  27. H. Braun, J. Weisbrod. Evolving neural feedforward networks. Proceedings of the International Conference Artificial Neural Nets and Genetic Algorithms ICANNGA 93, pp. 18–24, Springer, 1993.
  28. H. Braun, P. Zagorski. ENZO-M - a Hybrid Approach for Optimizing Neural Networks by Evolution and Learning. Proceedings of the International Conference on Evolutionary Computation PPSN III, 1994.
  29. H. Braun, P. Zagorski. ENZO-M - a Powerful Design Tool to Evolve Multilayer Feedforward Networks. Proceedings of the IEEE World Congress on Computational Intelligence ICEC, 1994.
  30. H.J. Bremermann. Optimization through evolution and recombination. In: Yovits, Jacobi, Goldstein (eds.), Self-Organizing Systems, Spartan Press, Washington, 1962.
  31. G.A. Carpenter, S. Grossberg. The ART of adaptive pattern recognition by a self-organizing neural network. Computer, March 1988, 77–88.
  32. A.K. Chandra, L.J. Stockmeyer, U. Vishkin. Constant depth reducibility. SIAM Journal on Computing, 13(2): 423–439, May 1984.
  33. J.-P. Changeux, P. Courrège, A. Danchin. A theory of the epigenesis of neural networks by selective stabilization of synapses. Proceedings of the National Academy of Sciences USA 70(10): 2974–2978, 1973.
  34. H. Christophel. Optimieren neuronaler Bewertungsmodelle mit Hilfe von TD-Lernen und Evolution. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1995.
  35. A. Church. The calculi of lambda-conversion. Annals of Mathematical Studies, 6, 1941.
  36. Charles Darwin. The origin of species by means of natural selection, or the preservation of favoured races in the struggle for life. Penguin Books, London, 1859.
  37. P. Dayan. The convergence of TD(λ) for general λ. Machine Learning, 8: 241–362, 1992.
  38. A. Dold. Inkrementelle Verbesserung neuronaler Strategien mittels Einbindung symbolischer Ansätze am Beispiel des Mühleendspiels. Student thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1992.
  39. S. Dominic, D. Whitley, R. Das. Genetic reinforcement learning for neural networks. Proceedings of the International Joint Conference on Neural Networks IJCNN 91, Vol. 2, 71–76, Seattle, New York, 1991.
  40. G. Dueck, T. Scheuer. Threshold accepting: a general purpose optimization algorithm appearing superior to simulated annealing. Journal of Computational Physics, 90: 161–175, 1990.
  41. G. Dueck. New optimization heuristics - the great deluge algorithm and the record-to-record travel. Journal of Computational Physics, 104: 86–92, 1993.
  42. G.M. Edelman. Neural Darwinism. Basic Books, New York, 1987.
  43. S.E. Fahlman. Fast-learning variations on backpropagation: an empirical study. In: Touretzky (ed.), Proceedings of the 1988 Connectionist Models Summer School (Pittsburgh 1988), 524–532. Morgan Kaufmann, San Mateo, 1988.
  44. D.B. Fogel. An evolutionary approach to the travelling salesman problem. Biological Cybernetics 63: 11–114, 1988.
  45. D.B. Fogel. Evolving artificial intelligence. Dissertation, University of California, San Diego, 1992.
  46. D.B. Fogel. On the philosophical differences between evolutionary algorithms and genetic algorithms. In: D.B. Fogel, W. Atmar (eds.), Proceedings of the Second Annual Conference on Evolutionary Programming, San Diego, CA, Evolutionary Programming Society, 1993.
  47. D.B. Fogel, W. Atmar (eds.). Proceedings of the First Annual Conference on Evolutionary Programming, San Diego, CA, Evolutionary Programming Society, 1992.
  48. D.B. Fogel, L.J. Fogel, V. Porto. Evolving neural networks. Biological Cybernetics 63(6): 487–493, 1990.
  49. L.J. Fogel, A.J. Owens, M.J. Walsh. Artificial intelligence through a simulation of evolution. In: Maxfield, Callahan, Fogel (eds.), Biophysics and Cybernetic Systems, Spartan, Washington, 1965.
  50. L.J. Fogel, A.J. Owens, M.J. Walsh. Artificial Intelligence through Simulated Evolution. Wiley, New York, 1966.
  51. F. Fogelman, E. Goles, G. Weisbuch. Transient length in sequential iterations of threshold functions. Discr. Appl. Math. 6: 95–98, 1983.
  52. R.M. Friedberg. A learning machine: Part I. IBM Journal of Research and Development 2: 2–13, 1958.
  53. R.M. Friedberg, B. Dunham, J.H. North. A learning machine: Part II. IBM Journal of Research and Development 3: 282–287, 1959.
  54. B. Fritzke. Growing cell structures - a self-organizing network in k dimensions. In: Aleksander, Taylor (eds.), Artificial Neural Networks II, North Holland, 1051–1056, 1992.
  55. B. Fritzke. Kohonen feature maps and growing cell structures - a performance comparison. In: Giles, Hanson, Cowan (eds.), Advances in Neural Information Processing Systems 5, Morgan Kaufmann, 1993.
  56. M. Fürst, J.B. Saxe, M. Sipser. Parity, circuits and the polynomial time hierarchy. Mathematical Systems Theory, 17(1): 13–27, 1984.
  57. E. Gardner. The space of interactions in neural network models. Journal of Physics A 21: 257, 1988.
  58. R. Gasser, J. Nievergelt. Es ist entschieden: Das Mühlespiel ist unentschieden. Informatik Spektrum, 17: 314–317, 1994.
  59. K. Gödel. Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme, I. Monatsh. Math. Phys. 38: 173–198, 1931. English translation: On formally undecidable propositions of Principia Mathematica and related systems. Translated by B. Meltzer. Basic Books, New York.
  60. D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, MA, 1989.
  61. M. Goldmann, J. Hastad, A. Razborov. Majority gates vs. general weighted threshold gates. In Proc. 7th Annual Structure in Complexity Theory Conference, pages 2–13. IEEE Computer Society Press, 1992.
  62. E. Goles, J. Olivos. The convergence of symmetric threshold automata. Information and Control 51: 98–104, 1981.
  63. M. Grötschel, O. Holland. Solution of large-scale symmetric traveling salesman problems. Math. Programming, 1989.
  64. S. Grossberg. The Adaptive Brain I/II. Elsevier, Amsterdam, 1987.
  65. S. Grossberg. Adaptive pattern classification and universal recoding: I. Parallel development and coding of neural feature detectors. Biological Cybernetics 23: 121–134, 1976. Also in: Anderson, Rosenfeld (eds.), Neurocomputing: Foundations of Research, 245–258, MIT Press, 1988.
  66. B. Hajek. Cooling schedules for optimal annealing. Mathematics of Operations Research 13: 311–329, 1988.
  67. A. Hajnal, W. Maass, P. Pudlák, M. Szegedy, G. Turán. Threshold circuits of bounded depth. In 28th Annual Symposium on Foundations of Computer Science, pages 99–110. IEEE Computer Society Press, October 1987.
  68. A. Haken. Connectionist networks that need exponential time to stabilize. Unpublished manuscript, Dept. of Computer Science, University of Toronto, 1989.
  69. A. Haken, M. Luby. Steepest descent can take exponential time for symmetric connection networks. Complex Systems 2: 191–196, 1988.
  70. P.J.B. Hancock. Genetic algorithms and permutation problems: a comparison of recombination operators for neural structure specification. In: Whitley, Schaffer (eds.), Combinations of Genetic Algorithms and Neural Networks, IEEE Computer Society Press, 1992.
  71. S. Harp, T. Samad. Genetic synthesis of neural network architecture. In: Davis (ed.), Handbook of Genetic Algorithms, 203–221, Van Nostrand Reinhold, New York, 1991.
  72. S. Harp, T. Samad, A. Guha. Towards the genetic synthesis of neural networks. Proceedings of the Third International Conference on Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, 1989.
  73. R. Hartley, H. Szu. A comparison of the computational power of neural networks. Proc. of the 1987 Int. Conf. on Neural Networks, Vol. 3, IEEE, New York, 15–22, 1987.
  74. J. Hartroth. The Truck Backer-Upper: Anwendung eines rückgekoppelten Backpropagation-Netzes. Student thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1991.
  75. B. Hassibi, D.G. Stork. Second order derivatives for network pruning: optimal brain surgeon. In: Hanson, Cowan, Giles (eds.), Advances in Neural Information Processing Systems 5 (NIPS-5), Morgan Kaufmann, 1993.
  76. J. Hastad. On the size of weights for threshold gates. Unpublished manuscript, 1992.
  77. G.E. Hinton, T.J. Sejnowski. Learning and relearning in Boltzmann machines. In: Rumelhart, McClelland (eds.), Parallel Distributed Processing, Vol. 1 (chap. 7), MIT Press, Cambridge, 1986.
  78. J.H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, 1975.
  79. J. Hong. Computation: Computability, Similarity and Duality. Pitman Publishing, London, 1986.
  80. J. Hong. On connectionist models. Technical Report 87-012, Dept. of Computer Science, Univ. of Chicago, June 1987.
  81. J.J. Hopfield. Neural networks and physical systems with emergent collective computational abilities. Proc. National Academy of Sciences, 79: 2554–2558, April 1982.
  82. J.J. Hopfield, D.W. Tank. "Neural" computation of decisions in optimization problems. Biological Cybernetics 52: 141–152, 1985.
  83. R.A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks 1: 295–307, 1988.
  84. R.E. Jenkins, B.P. Yuhas. A simplified neural network solution through problem decomposition: the case of the truck backer-upper. Neural Computation 4: 647–649, 1992.
  85. M.I. Jordan, R.A. Jacobs. Hierarchies of adaptive experts. In: Moody, Hanson, Lippmann (eds.), Advances in Neural Information Processing Systems (NIPS) 4, Morgan Kaufmann, 1992.
  86. J.S. Judd. On the complexity of loading shallow neural networks. Journal of Complexity, 4: 177–192, 1988.
  87. J.S. Judd. Neural Network Design and the Complexity of Learning. MIT Press, 1990.
  88. N. Karmarkar. A new polynomial-time algorithm for linear programming. Combinatorica, 4: 373–395, 1984.
  89. S. Kirkpatrick, C.D. Gelatt Jr., M.P. Vecchi. Optimization by simulated annealing. Science 220: 671–680, 1983.
  90. H. Kitano. Designing neural networks using genetic algorithm with graph generation system. Complex Systems 4: 461–476, 1990.
  91. S.C. Kleene. Representation of events in nerve nets and finite automata. In: Shannon, McCarthy (eds.), Automata Studies, Annals of Mathematics Studies 34: 3–41, Princeton Univ. Press, 1956.
  92. T. Kohonen. Self-organized formation of topologically correct feature maps. Biological Cybernetics 43: 59–69, 1982. Also in: Anderson, Rosenfeld (eds.), Neurocomputing: Foundations of Research, MIT Press, 1988.
  93. T. Kohonen. Self-Organization and Associative Memory. Springer-Verlag, Berlin, 1989.
  94. D. Koll. Untersuchung effizienter Methoden der Parallelisierung neuronaler Netze auf SIMD-Rechnern. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1994.
  95. D. Koll, M. Riedmiller, H. Braun. Massively parallel training of multilayer perceptrons with irregular topologies. Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms ICANNGA 95, Springer, 1995.
  96. J.R. Koza. Evolution and co-evolution of computer programs to control independently-acting agents. In: Meyer, Wilson (eds.), From Animals to Animats: Proceedings of the First International Conference on Simulation of Adaptive Behavior, MIT Press, Cambridge, MA, 1991.
  97. J.R. Koza. Genetic Programming. MIT Press, Cambridge, MA, 1993.
  98. H. Lawitzke. Optimieren mit selbstorganisierenden Karten. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1991.
  99. P.J.M. van Laarhoven, E.H.L. Aarts. Simulated Annealing: Theory and Applications. Kluwer, Dordrecht, 1989.
  100. Y. LeCun, J.S. Denker, S.A. Solla. Optimal brain damage. In: Touretzky (ed.), Advances in Neural Information Processing Systems 2 (NIPS-2), 598–605, Morgan Kaufmann, 1990.
  101. J.-H. Lin, J.S. Vitter. Complexity results on learning by neural nets. Machine Learning, 6: 211–230, 1991.
  102. W. Maass, G. Schnitger, E. Sontag. On the computational power of sigmoid versus Boolean threshold circuits. Proc. of the 32nd Ann. IEEE Symp. on Foundations of Computer Science, IEEE, New York, 767–776, 1991.
  103. V. Maniezzo. Genetic evolution of the topology and weight distribution of neural networks. IEEE Transactions on Neural Networks, 5(1): 39–53, 1994.
  104. W.S. McCulloch, W. Pitts. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5: 115–133, 1943.
  105. J.R. McDonnell, D.E. Waagen. Neural network structure design by evolutionary programming. Proceedings of the Second Annual Conference on Evolutionary Programming, 79–89, San Diego, CA, 1993.
  106. J.R. McDonnell, D.E. Waagen. Evolving recurrent perceptrons for time-series modelling. IEEE Transactions on Neural Networks, 5(1): 24–38, 1994.
  107. G.H. Mealy. A method for synthesizing sequential circuits. Bell System Tech. J. 34: 1045–1079, 1955.
  108. G.F. Miller, P.M. Todd, S.U. Hegde. Designing neural networks using genetic algorithms. Proceedings of the Third International Conference on Genetic Algorithms, 379–384, Arlington, 1989.
  109. M. Minsky, S. Papert. Perceptrons. MIT Press, 1969.
  110. D.J. Montana, L. Davis. Training feedforward neural networks using genetic algorithms. Proceedings of the International Joint Conference on Artificial Intelligence, 762–767, 1989.
  111. J. Moody, C. Darken. Learning with localized receptive fields. In: Touretzky, Hinton, Sejnowski (eds.), Proceedings of the 1988 Connectionist Models Summer School, 133–143, Morgan Kaufmann, San Mateo, 1988.
  112. E.F. Moore. Gedanken-experiments on sequential machines. In: Shannon, McCarthy (eds.), Automata Studies, Annals of Mathematics Studies 34, Princeton University Press, 1956.
  113. S. Muroga, I. Toda, S. Takasu. Theory of majority decision elements. J. Franklin Inst., 271: 376–418, May 1961.
  114. J. von Neumann. The general and logical theory of automata. In: Jeffress (ed.), Cerebral Mechanisms in Behavior: The Hixon Symposium, Wiley, 1–32, 1951.
  115. D. Nguyen. Applications of neural networks in adaptive control. Dissertation, Stanford University, 1991.
  116. D. Nguyen, B. Widrow. The truck backer-upper: an example of self-learning in neural networks. In: R. Eckmiller (ed.), Advanced Neural Computers, North Holland, 1990.
  117. S. Nolfi, J.L. Elman, D. Parisi. Learning and evolution in neural networks. CRL Technical Report 9019, University of California at San Diego, La Jolla, CA, 1990.
  118. M. Opper, W. Kinzel, J. Kleinz, R. Nehl. On the ability of the optimal perceptron to generalize. Journal of Physics A 23: L581–586, 1990.
  119. M. Padberg, G. Rinaldi. Optimization of a 532-city symmetric traveling salesman problem by branch and cut. Operations Research Letters, 6: 1–7, 1987.
  120. C.H. Papadimitriou, K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice Hall, New Jersey, 1982.
  121. I. Parberry. Circuit Complexity and Neural Networks. MIT Press, 1994.
  122. L. Prechelt. Proben1 - a set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Universität Karlsruhe, Fakultät für Informatik, 1994.
  123. K.-H. Preut. Strukturoptimierung von Neuro-Fuzzy-Systemen. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1995.
  124. U. Pütz. Evolutionäre Optimierung neuronaler Netze für Reinforcement-Probleme. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1995.
  125. N.J. Radcliffe. Genetic set recombination and its application to neural network topology optimization. Technical Report EPCC-TR91-21, University of Edinburgh, Scotland, 1991.
  126. T. Ragg, H. Braun, J. Feulner. Learning optimal winning strategies through experience using temporal difference methods. Proceedings of the Int. Conf. on Artificial Neural Networks ICANN 95, 1995.
  127. I. Rechenberg. Cybernetic solution path of an experimental problem. Royal Aircraft Establishment, Library Translation 1122, Farnborough, Hants, Aug. 1965 (English translation of an unpublished abridged version of the lecture "Kybernetische Lösungsansteuerung einer experimentellen Forschungsaufgabe", prepared for the joint annual meeting of the Wissenschaftliche Gesellschaft für Luft- und Raumfahrt and the Deutsche Gesellschaft für Raketentechnik und Raumfahrt).
  128. I. Rechenberg. Evolutionsstrategie - Optimierung technischer Systeme nach den Prinzipien der biologischen Evolution. Frommann-Holzboog, Stuttgart, 1973.
  129. I. Rechenberg. Evolutionsstrategie '94. Frommann-Holzboog, Stuttgart, 1994.
  130. M. Riedmiller. Schnelle adaptive Lernverfahren für mehrschichtige Feedforward-Netzwerke - Vergleich und Weiterentwicklung. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1992.
  131. M. Riedmiller. Advanced supervised learning in multilayer perceptrons - from backpropagation to adaptive learning algorithms. Computer Standards & Interfaces 16: 265–278, 1994.
  132. M. Riedmiller. Learning to control dynamic systems. Proc. of the European Meeting on Cybernetics and Systems Research EMCSR, Vienna, 1996.
  133. M. Riedmiller. Selbständig lernende neuronale Steuerungen. Dissertation, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1996.
  134. M. Riedmiller, H. Braun. RPROP: A Fast Adaptive Learning Algorithm. International Symposium on Computer and Information Science VII, pp. 279–286, 1992.
  135. M. Riedmiller, H. Braun. RPROP: A Fast and Robust Backpropagation Learning Strategy. Fourth Australian Conference on Neural Networks, pp. 169–172, 1993a.
  136. M. Riedmiller, H. Braun. A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm. Proceedings of the IEEE International Conference on Neural Networks (ICNN), pp. 586–591, 1993b.
  137. H. Ritter, T. Martinetz, K. Schulten. Neuronale Netze: Eine Einführung in die Theorie selbstorganisierender Netzwerke. Addison-Wesley, 1990.
  138. H. Ritter, K. Schulten. Topology conserving mappings for learning motor tasks. In: Denker (ed.), Neural Networks for Computing, AIP Conf. Proceedings 151, Snowbird, Utah, 393–406, 1986.
  139. P. Robbins, A. Soper, K. Rennols. Use of genetic algorithms for optimal topology determination in back propagation neural networks. Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms, 726–730, Springer-Verlag, 1993.
  140. F. Rosenblatt. The perceptron: a probabilistic model for information storage and organization in the brain. Psychological Review 65: 386–408, 1958.
  141. S. Ross. Introduction to Stochastic Dynamic Programming. Academic Press, New York, 1983.
  142. D.E. Rumelhart, G.E. Hinton, R.J. Williams. Learning internal representations by error propagation. In: Rumelhart, McClelland (eds.), Parallel Distributed Processing, Vol. 1 (chap. 5), MIT Press, Cambridge, MA, 1986.
  143. D.E. Rumelhart, P. Smolensky, J.L. McClelland, G.E. Hinton. Schemata and sequential thought processes in PDP models. In: Rumelhart, McClelland (eds.), Parallel Distributed Processing, Vol. 2 (chap. 14), MIT Press, Cambridge, MA, 1986.
  144. J. Schäfer. Evolution neuronaler Netze zur Erkennung handgeschriebener Ziffern. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1994.
  145. J. Schäfer, H. Braun. Optimizing Classifiers for Handwritten Digits by Genetic Algorithms. Proceedings of the International Conference on Artificial Neural Networks and Genetic Algorithms ICANNGA 95, Springer, 1995.
  146. J.D. Schaffer, D. Whitley, L.J. Eshelman. Combinations of genetic algorithms and neural networks: a survey of the state of the art. In: Whitley, Schaffer (eds.), Proceedings of the International Workshop on Combinations of Genetic Algorithms and Neural Networks, 1–37, 1992.
  147. W. Schiffmann, M. Joost, R. Werner. Performance evaluation of evolutionarily created neural network topologies. Proceedings of the First International Conference on Parallel Problem Solving from Nature, 274–283, Springer-Verlag, 1990.
  148. W. Schiffmann, M. Joost, R. Werner. Optimization of the backpropagation algorithm for training multilayer perceptrons. Technical Report, Universität Koblenz, Institut für Physik, 1993.
  149. W. Schiffmann, M. Joost, R. Werner. Application of genetic algorithms to the construction of topologies for multilayer perceptrons. Proceedings of the International Conference Artificial Neural Nets and Genetic Algorithms ICANNGA 93, pp. 675–682, Springer, 1993.
  150. M. Schmitt. Komplexität neuronaler Lernprobleme. Dissertation, Fakultät für Informatik, Universität Ulm, 1994.
  151. H.-P. Schwefel. Experimentelle Optimierung einer Zweiphasendüse, Teil I. Bericht 35 für das Projekt MHD-Staustrahlrohr, AEG Forschungsinstitut, Berlin, Oct. 1968.
  152. H.-P. Schwefel. Evolutionsstrategie und numerische Optimierung. Dissertation, Technische Universität Berlin, Abteilung für Prozessautomatisierung, 1975.
  153. H.-P. Schwefel. Evolution and Optimum Seeking. John Wiley & Sons, New York, 1995.
  154. H.T. Siegelmann, E.D. Sontag. On the computational power of neural nets. Proc. of the 5th Ann. Workshop on Computational Learning Theory, ACM Press, New York, 440–449, 1992.
  155. S.P. Singh. Learning to solve Markovian decision problems. IEEE Transactions on Systems, Man, and Cybernetics, 13: 834–846, 1994.
  156. M. Sipser. Borel sets and circuit complexity. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, pages 61–69. ACM Press, 1983.
  157. E.D. Sontag. Feedforward nets for interpolation and classification. J. Comput. Syst. Sci. 45: 20–48, 1992.
  158. A. Sprenger. Evolutive Optimierung von Neuro-Fuzzy-Netzen basierend auf Radialen Basisfunktionen. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1996.
  159. A. Stahlberger. Optimal Brain Surgeon, ein Verfahren zum Ausdünnen neuronaler Netze - Verbesserung und neue Ansätze. Diploma thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1996.
  160. A. Stahlberger, M. Riedmiller. Fast network pruning and feature extraction by removing complete units. Advances in Neural Information Processing Systems 9 (NIPS 9), MIT Press, 1996.
  161. R.S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3: 9–44, 1988.
  162. R.S. Sutton. Generalization in reinforcement learning: successful examples using sparse coarse coding. Advances in Neural Information Processing Systems 8, MIT Press, 1996.
  163. G.J. Tesauro. Neurogammon wins Computer Olympiad. Neural Computation, 1: 321–323, 1989.
  164. G.J. Tesauro. Practical issues in temporal difference learning. Machine Learning, 8: 257–277, 1992.
  165. G.J. Tesauro. Temporal difference learning and TD-Gammon. Communications of the ACM, 38(3): 58–68, 1995.
  166. T. Tollenaere. SuperSAB: fast adaptive backpropagation with good scaling properties. Neural Networks 3(5), 1990.
  167. N. Trede. The Truck Backer-Upper: Training des Emulators. Student thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1991.
  168. N.L. Tu. Optimierung von kombinatorischen Problemen mit neuronalen Netzen. Student thesis, Universität Karlsruhe, Institut für Logik, Komplexität und Deduktionssysteme, 1993.
  169. A.M. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., 2(42): 230–265, 1936.
  170. T.P. Vogl, J.K. Mangis, A.K. Rigler, W.T. Zink, D.L. Alkon. Accelerating the convergence of the backpropagation method. Biological Cybernetics, 59: 257–263, Springer-Verlag, 1988.
  171. C.J.C.H. Watkins. Learning from delayed rewards. Ph.D. thesis, Cambridge University, Cambridge, England, 1989.
  172. G. Weiß. Neural networks and evolutionary computation, part I: hybrid approaches in artificial intelligence. Proceedings of the First IEEE Conference on Evolutionary Computation, 268–277, 1994.
  173. D. Whitley. Genetic algorithms and neural networks. In: Periaux, Winter (eds.), Genetic Algorithms in Engineering and Computer Science, John Wiley & Sons, 1995.
  174. D. Whitley, R. Das, C.W. Anderson. Genetic reinforcement learning for neurocontrol problems. Machine Learning 13(2–3): 259–284, 1993.
  175. D. Whitley, S. Dominic, R. Das. Genetic reinforcement learning with multilayer neural networks. Proceedings of the Fourth International Conference on Genetic Algorithms, 562–569, San Diego, Morgan Kaufmann, 1991.
  176. D. Whitley, T. Hanson. Optimizing neural networks using faster, more accurate genetic search. Proceedings of the 3rd International Conference on Genetic Algorithms, 391–395, 1989.
  177. D. Whitley, T. Starkweather, C. Bogart. Genetic algorithms and neural networks: optimizing connections and connectivity. Parallel Computing 14: 347–361, North Holland, 1990.
  178. R.A. Wilkinson (ed.). The first census optical character recognition systems conference. National Institute of Standards and Technology, NISTIR 4912. Available from the NIST archive via FTP: sequoya.ucsl.nist.gov, directory: pub/NISTIR, 1992.
  179. X. Yao. A review of evolutionary artificial neural networks. International Journal of Intelligent Systems, 8(4): 539–567, 1992.
  180. X. Yao. Evolutionary artificial neural networks. International Journal of Neural Systems, 4(3): 202–222, 1993.
  181. A. Zell. Simulation Neuronaler Netze. Addison-Wesley, 1994.
  182. A. Zell. SNNS, Stuttgart Neural Network Simulator. User Manual, Version 4.1, Report No. 6/95, Institut für Paralleles und Verteiltes Rechnen (IPVR), Universität Stuttgart. Available via FTP: ftp.informatik.uni-stuttgart.de, directory: /pub/SNNS, 1995.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Heinrich Braun, Karlsruhe, Germany
