Constructing Minimal Knowledge Bases by Machine Learning

  • O. Najmann
  • K. Eckstein
Conference paper

Abstract

Like databases, knowledge bases should not contain redundant information. Therefore, when knowledge is acquired, the size of the corresponding knowledge base should be kept to a minimum without loss of information. We analyze the problem of constructing minimal, i.e., non-redundant, knowledge bases using machine learning methods. Here, knowledge is represented as a decision tree. A number of heuristics exist that control the process of learning such trees from examples. We introduce four new heuristics and analyze these and two existing ones with respect to the size of the trees they produce. For this purpose, we performed computer-simulated experiments to obtain average-case results. The results show marked differences in the average performance ratio: the values range from 1.01 to 1.41, and “random learning” produces overly large decision trees (65%–125% larger than necessary).
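To make the setting concrete, the following minimal sketch (in Python; this is not the authors' code, and all function names and the toy data are illustrative assumptions) grows a decision tree top-down from classified examples and counts its nodes, once with a random choice of the split attribute, corresponding to the paper's "random learning", and once with a standard entropy-minimizing choice. The paper's performance ratio relates such a learned tree size to the size of a minimal tree for the same concept.

```python
# Minimal sketch, assuming binary attributes and noise-free examples.
# Not the paper's algorithm or heuristics; an ID3-style illustration.
import math
import random
from collections import Counter
from itertools import product

def entropy(examples):
    """Shannon entropy of the class labels in a list of (x, y) pairs."""
    counts = Counter(y for _, y in examples)
    total = len(examples)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def split(examples, a):
    """Partition the examples on the binary attribute with index a."""
    return ([(x, y) for x, y in examples if x[a] == 0],
            [(x, y) for x, y in examples if x[a] == 1])

def majority(examples):
    """Most frequent class label among the examples."""
    return Counter(y for _, y in examples).most_common(1)[0][0]

def build_tree(examples, attrs, heuristic):
    """Grow a tree top-down; returns (tree, number_of_nodes).

    A tree is either a class label (leaf) or a triple
    (attribute, subtree_for_value_0, subtree_for_value_1)."""
    if len({y for _, y in examples}) <= 1 or not attrs:
        return majority(examples), 1
    if heuristic == "random":
        a = random.choice(attrs)          # "random learning"
    else:                                 # entropy-minimizing split
        a = min(attrs, key=lambda b: sum(
            len(p) * entropy(p) for p in split(examples, b) if p))
    lo, hi = split(examples, a)
    if not lo or not hi:                  # attribute is constant here
        return majority(examples), 1
    rest = [b for b in attrs if b != a]
    t0, n0 = build_tree(lo, rest, heuristic)
    t1, n1 = build_tree(hi, rest, heuristic)
    return (a, t0, t1), 1 + n0 + n1

# Toy target concept: x0 AND x1 over four binary attributes.
data = [(x, x[0] & x[1]) for x in product((0, 1), repeat=4)]
_, n_entropy = build_tree(data, list(range(4)), "entropy")
_, n_random = build_tree(data, list(range(4)), "random")
print(n_entropy, n_random)   # tree sizes; random is typically larger
```

On this toy concept the entropy-guided choice yields the minimal tree of 5 nodes (two tests, three leaves), while random attribute choices frequently test irrelevant attributes first and so produce considerably larger trees, mirroring the gap reported in the abstract.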

Keywords

Decision Tree; Expert System; Size Function; Split Attribute; Decision Tree Induction

Copyright information

© Springer-Verlag Wien 1991

Authors and Affiliations

  • O. Najmann 1
  • K. Eckstein 1
  1. Praktische Informatik, Universität Duisburg, Duisburg, Germany
