
Attributauswahlmaße für die Induktion von Entscheidungsbäumen: Ein Überblick

Chapter in: Data Mining

Part of the book series: Beiträge zur Wirtschaftsinformatik, volume 27

Abstract

The top-down induction of decision trees is a well-known and widely used technique for constructing classifiers. The success of this method depends strongly on the selection measure used to determine the next attribute to test while the decision tree is being built. In this paper we give an overview of a number of selection measures that have been proposed for decision tree induction. We explain the ideas underlying these measures and compare the measures considered on the basis of experimental results.
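To make the role of a selection measure concrete, the following minimal Python sketch computes Shannon information gain, one widely used measure of this kind, for categorical attributes of a small example table. The dataset, attribute names, and function names are illustrative assumptions, not taken from the chapter; other selection measures would simply replace `information_gain` in the same role.

```python
from collections import Counter
from math import log2


def entropy(labels):
    """Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())


def information_gain(rows, attr, target):
    """Reduction in class entropy obtained by splitting on `attr`."""
    labels = [r[target] for r in rows]
    n = len(rows)
    remainder = 0.0
    for value in set(r[attr] for r in rows):
        subset = [r[target] for r in rows if r[attr] == value]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder


# Illustrative data: which attribute should be tested at the root?
data = [
    {"outlook": "sunny",  "windy": "no",  "play": "no"},
    {"outlook": "sunny",  "windy": "yes", "play": "no"},
    {"outlook": "rainy",  "windy": "no",  "play": "yes"},
    {"outlook": "rainy",  "windy": "yes", "play": "no"},
    {"outlook": "cloudy", "windy": "no",  "play": "yes"},
]

# The attribute with the highest information gain is tested next.
best = max(["outlook", "windy"], key=lambda a: information_gain(data, a, "play"))
print(best)
```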




Copyright information

© 1998 Physica-Verlag Heidelberg

About this chapter

Cite this chapter

Borgelt, C., Kruse, R. (1998). Attributauswahlmaße für die Induktion von Entscheidungsbäumen: Ein Überblick. In: Nakhaeizadeh, G. (eds) Data Mining. Beiträge zur Wirtschaftsinformatik, vol 27. Physica-Verlag HD. https://doi.org/10.1007/978-3-642-86094-2_4


  • DOI: https://doi.org/10.1007/978-3-642-86094-2_4

  • Publisher Name: Physica-Verlag HD

  • Print ISBN: 978-3-7908-1053-0

  • Online ISBN: 978-3-642-86094-2

  • eBook Packages: Springer Book Archive
