Induction of Decision Trees Based on the Rough Set Theory

  • Tu Bao Ho
  • Trong Dung Nguyen
  • Masayuki Kimura
Conference paper
Part of the Studies in Classification, Data Analysis, and Knowledge Organization book series (STUDIES CLASS)


This paper has two objectives. The first is to introduce a new measure (the R-measure) of dependency between groups of attributes in a data set, inspired by the notion of attribute dependency in rough set theory. The second is to apply this measure to attribute selection in decision tree induction, and to carry out an experimental comparative evaluation of decision tree systems using the R-measure and other attribute selection measures widely used in machine learning: gain-ratio, gini-index, dN distance, relevance, and χ².
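The paper's preview is unavailable, so the R-measure itself cannot be reproduced here, but the classical rough-set dependency degree that the abstract says inspired it can be sketched. The following is a minimal illustration of Pawlak's dependency degree γ(P, d) = |POS_P(d)| / |U|, not the R-measure; the function name and data layout are illustrative assumptions:

```python
from collections import defaultdict

def dependency_degree(rows, attrs, decision):
    """Pawlak's rough-set dependency degree gamma(P, d) = |POS_P(d)| / |U|.

    rows: list of dicts (one per object in the universe U)
    attrs: condition attributes P (list of keys)
    decision: decision attribute d (key)
    """
    # Partition U into equivalence classes of objects indiscernible on attrs.
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[a] for a in attrs)
        classes[key].append(row)
    # Positive region: objects in classes whose members all share one
    # decision value (i.e. attrs determine the decision for them).
    pos = sum(len(members) for members in classes.values()
              if len({m[decision] for m in members}) == 1)
    return pos / len(rows)

# Toy data: "outlook" alone only partly determines "play";
# "outlook" together with "windy" determines it fully.
rows = [
    {"outlook": "sunny", "windy": False, "play": "no"},
    {"outlook": "sunny", "windy": True,  "play": "no"},
    {"outlook": "rainy", "windy": False, "play": "yes"},
    {"outlook": "rainy", "windy": True,  "play": "no"},
]
print(dependency_degree(rows, ["outlook"], "play"))           # 0.5
print(dependency_degree(rows, ["outlook", "windy"], "play"))  # 1.0
```

In attribute selection, a splitting attribute with a higher dependency degree better determines the class label on the examples reaching a node; the R-measure refines this idea for comparison against gain-ratio, gini-index, and the other measures listed above.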


Keywords (machine-generated, not supplied by the authors): Cross Validation, Attribute Selection, Selection Measure, Pruning Technique, Experimental Comparative Study




  1. Baim, P.W. (1988): A method for attribute selection in inductive learning systems. IEEE Trans. on PAMI, 10, 888–896.
  2. Breiman, L., Friedman, J., Olshen, R., Stone, C. (1984): Classification and Regression Trees. Belmont, CA: Wadsworth.
  3. Buntine, W., Niblett, T. (1991): A further comparison of splitting rules for decision-tree induction. Machine Learning, 8, 75–85.
  4. Dougherty, J., Kohavi, R., Sahami, M. (1995): Supervised and unsupervised discretization of continuous features. Proceedings 12th International Conference on Machine Learning, Morgan Kaufmann, 194–202.
  5. Ho, T.B., Nguyen, T.D. (1997): An interactive-graphic system for decision tree induction (under review).
  6. Kononenko, I. (1995): On biases in estimating multi-valued attributes. Proc. 14th Inter. Joint Conf. on Artificial Intelligence, Montreal, Morgan Kaufmann, 1034–1040.
  7. Kohavi, R. (1995): A study of cross-validation and bootstrap for accuracy estimation and model selection. Proc. Int. Joint Conf. on Artificial Intelligence IJCAI'95, 1137–1143.
  8. Liu, W.Z., White, A.P. (1994): The importance of attribute selection measures in decision tree induction. Machine Learning, 15, 25–41.
  9. López de Mantaras, R. (1991): A distance-based attribute selection measure for decision tree induction. Machine Learning, 6, 81–92.
  10. Mingers, J. (1989): An empirical comparison of selection measures for decision-tree induction. Machine Learning, 3, 319–342.
  11. Pawlak, Z. (1991): Rough Sets: Theoretical Aspects of Reasoning About Data. Kluwer Academic Publishers.
  12. Pawlak, Z., Grzymala-Busse, J., Slowinski, R., Ziarko, W. (1995): Rough sets. Communications of the ACM, 38, 89–95.
  13. Quinlan, J.R. (1993): C4.5: Programs for Machine Learning. Morgan Kaufmann.
  14. Wille, R. (1992): Concept lattice and conceptual knowledge systems. Computers and Mathematics with Applications, 23, 493–515.

Copyright information

© Springer Japan 1998

Authors and Affiliations

  • Tu Bao Ho
    • 1
  • Trong Dung Nguyen
    • 1
  • Masayuki Kimura
    • 1
  1. Japan Advanced Institute of Science and Technology, Hokuriku, Tatsunokuchi, Ishikawa, Japan
