Inductive learning

Wu Xindong

Regular Papers


Machine learning (ML) is a major subfield of artificial intelligence (AI). It has been seen as a feasible way of avoiding the knowledge-acquisition bottleneck in the development of knowledge-based systems. Research on ML has concentrated mainly on inductive learning, a paradigm for inducing rules from unordered sets of examples. AQ11 and ID3, the two most widespread algorithms in ML, are both inductive. This paper first summarizes AQ11, ID3 and the newly developed HCV algorithm, which is based on the extension matrix approach, and then reviews recent developments in inductive learning and automatic knowledge acquisition from databases.
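The core idea behind ID3 mentioned above — greedy, top-down decision-tree induction driven by information gain — can be sketched as follows. This is an illustrative minimal sketch, not the paper's or Quinlan's actual implementation; the attribute-value dictionaries and the `"class"` target key are assumptions made for the example.

```python
from collections import Counter
import math

def entropy(labels):
    """Shannon entropy of a multiset of class labels."""
    total = len(labels)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(labels).values())

def information_gain(examples, attr, target="class"):
    """Expected reduction in entropy from splitting on attr."""
    base = entropy([e[target] for e in examples])
    remainder = 0.0
    for value in {e[attr] for e in examples}:
        subset = [e[target] for e in examples if e[attr] == value]
        remainder += len(subset) / len(examples) * entropy(subset)
    return base - remainder

def id3(examples, attrs, target="class"):
    """Greedy tree induction: recursively split on the highest-gain attribute."""
    labels = [e[target] for e in examples]
    if len(set(labels)) == 1:           # node is pure: emit a leaf
        return labels[0]
    if not attrs:                       # no attributes left: majority vote
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: information_gain(examples, a, target))
    branches = {}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        branches[value] = id3(subset, [a for a in attrs if a != best], target)
    return (best, branches)
```

On a toy weather-style dataset where only one attribute correlates with the class, the sketch selects that attribute as the root and produces pure leaves in a single split.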


Keywords: generalization, specialization, decision trees, extension matrixes, machine learning, databases
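The extension matrix idea underlying HCV can also be illustrated in a deliberately simplified form. In this sketch (an assumption-laden illustration, not Hong's or the paper's formulation), the matrix of one positive example against all negative examples marks an entry "dead" when the negative example agrees with the positive one on that attribute, since such an entry cannot distinguish the two; a greedy cover then picks attributes whose values exclude all negatives. Real extension matrix algorithms search for selector paths rather than using plain equality conditions.

```python
DEAD = None  # marker for a "dead" (non-distinguishing) entry

def extension_matrix(pos, negatives):
    """Matrix of one positive example against the negatives: entry (i, j)
    is negative i's value for attribute j, or DEAD when it equals the
    positive example's value and so cannot separate the two examples."""
    return [[DEAD if neg[j] == pos[j] else neg[j] for j in range(len(pos))]
            for neg in negatives]

def greedy_cover(matrix, pos):
    """Greedily choose attributes until every row (negative example) has a
    chosen non-dead entry; each choice contributes the condition
    attribute_j == pos[j], which excludes exactly those negatives."""
    uncovered = set(range(len(matrix)))
    conditions = []
    while uncovered:
        # pick the attribute whose non-dead entries exclude the most rows
        best_j = max(range(len(pos)),
                     key=lambda j: sum(1 for i in uncovered
                                       if matrix[i][j] is not DEAD))
        excluded = {i for i in uncovered if matrix[i][best_j] is not DEAD}
        if not excluded:
            raise ValueError("remaining negatives cannot be separated")
        conditions.append((best_j, pos[best_j]))
        uncovered -= excluded
    return conditions
```

The returned conjunction of equality conditions covers the positive example while excluding every negative example that contributed a row to the matrix.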



Copyright information

© Science Press, Beijing China and Allerton Press Inc. 1993

Authors and Affiliations

  • Wu Xindong (1)
  1. Department of Artificial Intelligence, University of Edinburgh, Edinburgh, UK
