
Algorithmisches Lernen auf der Basis empirischer Daten

  • Conference paper

Part of the book series: Informatik-Fachberichte (volume 291)

Abstract

It is a fundamental human ability to transform empirical experience into hypotheses about reality. The resulting hypothesis, whether held consciously or unconsciously, represents the knowledge contained in the data in a more compact and generalized form. The ongoing process of reconciling hypotheses with empirical experience is a form of learning. While human learning seemingly effortlessly enables us to acquire linguistic or visual concepts and to perform complex motor actions, it nevertheless largely resists all attempts to cast it into algorithmic form and transfer it to machines.
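The process the abstract describes, compressing labelled empirical data into a compact, generalized hypothesis, can be illustrated with a minimal toy sketch (a hypothetical example, not taken from the paper): learning a one-dimensional threshold concept from random labelled samples. All names here (`learn_threshold`, `target`) are illustrative assumptions.

```python
import random

def learn_threshold(sample):
    """Return a hypothesis consistent with labelled data assumed to come
    from some hidden threshold t, where x is labelled 1 iff x >= t.
    The whole sample is compressed into a single number -- a compact,
    generalized representation of the knowledge in the data."""
    positives = [x for x, y in sample if y == 1]
    # Any cut point between the largest negative and the smallest positive
    # example is consistent with the data; we take the smallest positive.
    return min(positives) if positives else float("inf")

# Draw examples from a hidden target threshold and learn from them.
random.seed(0)
target = 0.6
sample = [(x, int(x >= target)) for x in (random.random() for _ in range(100))]
h = learn_threshold(sample)
# The learned hypothesis agrees with every labelled example it was given.
assert all((x >= h) == bool(y) for x, y in sample)
```

With more samples, the learned cut point `h` moves ever closer to the hidden `target`, a simple instance of the reconciliation of hypothesis and experience that the abstract calls learning.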






Copyright information

© 1991 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Simon, H.U. (1991). Algorithmisches Lernen auf der Basis empirischer Daten. In: Brauer, W., Hernández, D. (eds) Verteilte Künstliche Intelligenz und kooperatives Arbeiten. Informatik-Fachberichte, vol 291. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76980-1_44



  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-54617-7

  • Online ISBN: 978-3-642-76980-1

