
Toward effective knowledge acquisition with first-order logic induction

Journal of Computer Science and Technology

Abstract

Knowledge acquisition with machine learning techniques is a fundamental requirement for knowledge discovery from databases and for data mining systems. Two techniques in particular, inductive learning and theory revision, have been used toward this end. This paper presents a method that combines both approaches to acquire theories (regularities) effectively from a set of training examples: inductive learning acquires new regularities from the training examples, while theory revision improves an initial theory. In addition, a theory preference criterion that combines an MDL-based heuristic with the Laplace estimate is employed to select the most promising theory. The resulting algorithm, which integrates inductive learning and theory revision under this criterion, can handle complex problems and obtains theories that are useful in terms of their predictive accuracy.
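The abstract's theory preference criterion, combining an MDL-based heuristic with the Laplace estimate, can be illustrated with a minimal sketch. The paper does not give its exact formulas here, so the functions below (and the weighting constant in `theory_score`) are illustrative assumptions, not the authors' definitions; only the Laplace estimate `(p + 1) / (p + n + 2)` is the standard form.

```python
# Illustrative sketch of a theory preference criterion: balance a clause's
# estimated accuracy (Laplace estimate) against its description length
# (a crude MDL-style penalty). The weighting is a hypothetical choice.

def laplace_accuracy(pos: int, neg: int) -> float:
    """Laplace estimate of a theory's accuracy: (p + 1) / (p + n + 2),
    where p and n are the positive/negative examples it covers."""
    return (pos + 1) / (pos + neg + 2)

def mdl_cost(num_literals: int, bits_per_literal: float = 2.0) -> float:
    """MDL-style description length: longer theories cost more bits."""
    return num_literals * bits_per_literal

def theory_score(pos: int, neg: int, num_literals: int) -> float:
    """Prefer theories that are both accurate (high Laplace estimate)
    and simple (low description length)."""
    return laplace_accuracy(pos, neg) - 0.01 * mdl_cost(num_literals)
```

Under such a criterion, a compact theory covering the same examples scores higher than a longer one, which is the bias toward simple, predictive theories the abstract describes.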



Author information


Corresponding author

Correspondence to Zhang Xiaolong.

Additional information

ZHANG Xiaolong received his B.S. and M.S. degrees in computer science from Northeastern University, China, in 1985 and 1988 respectively. He received his Ph.D. from the Dept. of Computer Science, Tokyo Institute of Technology, in 1998. He has been an associate professor at Wuhan University of Science and Technology. From 1998 to 2001 he was with IBM Japan as an IT Specialist, consulting on CRM and Business Intelligence projects and developing data warehousing and data mining solutions for industry. He is now an IT manager in the IT Solution Department of AXA Life Insurance Company, Japan. His research interests include machine learning, knowledge discovery from databases, data mining and data warehousing, natural language processing, and intelligent software. He is a member of the Japanese Society for Artificial Intelligence.

Masayuki Numao is an associate professor at the Dept. of Computer Science, Tokyo Institute of Technology. He received his B.S. degree from the Dept. of Electrical Engineering in 1982 and his Ph.D. from the Dept. of Computer Science in 1987, both from the Tokyo Institute of Technology. He was a visiting scholar at CSLI, Stanford University, from 1989 to 1990. His research interests include artificial intelligence, global intelligence, and machine learning. Numao is a member of the Information Processing Society of Japan, the Japanese Society for Artificial Intelligence, the Japanese Cognitive Science Society, the Japanese Society for Software Science and Technology, AAAI, and AIUEO.


About this article

Cite this article

Zhang, X., Numao, M. Toward effective knowledge acquisition with first-order logic induction. J. Comput. Sci. & Technol. 17, 565–577 (2002). https://doi.org/10.1007/BF02948825
