Abstract
It is a fundamental human ability to transform empirical experience into hypotheses about reality. The resulting hypothesis, whether held consciously or unconsciously, represents the knowledge accumulated in the data in a more compact and generalized form. The ongoing process of reconciling hypotheses with empirical experience is a form of learning. While human learning seemingly enables us to acquire linguistic or visual concepts and to perform complex motor actions with effortless ease, it nevertheless largely resists all attempts to cast it into algorithmic form and transfer it to machines.
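The "reconciling hypotheses with empirical experience" described above can be illustrated with a classical learning algorithm: Rosenblatt's perceptron rule, which adjusts a linear hypothesis whenever it conflicts with an observed example. This is a minimal sketch for intuition only; the toy data and parameter choices are hypothetical and not taken from the paper.

```python
# Sketch: learning as the repeated reconciliation of a hypothesis with data,
# illustrated by the perceptron update rule (hypothetical toy example).

def perceptron_train(samples, epochs=20):
    """samples: list of (features, label) pairs with label in {-1, +1}."""
    w = [0.0] * len(samples[0][0])  # initial hypothesis: the zero weight vector
    b = 0.0
    for _ in range(epochs):
        for x, y in samples:
            # Current hypothesis predicts sign(w·x + b)
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if score > 0 else -1
            if prediction != y:
                # Hypothesis conflicts with experience: nudge it toward the example
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Linearly separable toy data: label is +1 roughly when x0 + x1 > 1
data = [([0.0, 0.0], -1), ([1.0, 1.0], 1), ([0.2, 0.3], -1), ([0.9, 0.8], 1)]
w, b = perceptron_train(data)
errors = sum(
    1 for x, y in data
    if (1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1) != y
)
```

On linearly separable data such as this, the perceptron convergence theorem guarantees that the update loop eventually produces a hypothesis consistent with all examples.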
© 1991 Springer-Verlag Berlin Heidelberg
Cite this paper
Simon, H.U. (1991). Algorithmisches Lernen auf der Basis empirischer Daten. In: Brauer, W., Hernández, D. (eds) Verteilte Künstliche Intelligenz und kooperatives Arbeiten. Informatik-Fachberichte, vol 291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76980-1_44
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-54617-7
Online ISBN: 978-3-642-76980-1