Abstract
Boosting, a methodology for constructing and combining multiple classifiers, has been found to lead to substantial improvements in predictive accuracy. Although boosting was formulated in a propositional learning context, the same ideas can be applied to first-order learning (also known as inductive logic programming). Boosting is used here with a system that learns relational definitions of functions. Results show that the magnitude of the improvement, the additional computational cost, and the occasional negative impact of boosting all resemble the corresponding observations for propositional learning.
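The boosting idea the abstract refers to can be sketched as AdaBoost.M1 (Freund and Schapire): repeatedly fit a weak learner on reweighted training data, up-weighting misclassified examples, and combine the rounds by weighted vote. The sketch below uses one-feature threshold "stumps" as the weak learner purely for illustration; the chapter itself plugs a relational (first-order) learner into this loop instead. All function names here are illustrative, not from the chapter.

```python
# Minimal AdaBoost.M1 sketch with threshold-stump weak learners.
# Labels are +1 / -1. Illustrative only.
import math

def stump_learn(X, y, w):
    """Return the (error, feature, threshold, sign) stump with
    minimum weighted training error."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            for sign in (1, -1):
                pred = [sign if x[j] >= t else -sign for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, j, t, sign)
    return best

def stump_predict(model, x):
    _, j, t, sign = model
    return sign if x[j] >= t else -sign

def adaboost(X, y, rounds=10):
    n = len(X)
    w = [1.0 / n] * n            # uniform initial weights
    ensemble = []                # list of (alpha, model) pairs
    for _ in range(rounds):
        model = stump_learn(X, y, w)
        err = max(model[0], 1e-10)
        if err >= 0.5:           # weak learner no better than chance
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, model))
        # Up-weight misclassified examples, then renormalise.
        w = [wi * math.exp(-alpha * y[i] * stump_predict(model, X[i]))
             for i, wi in enumerate(w)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted-vote prediction of the boosted ensemble."""
    vote = sum(a * stump_predict(m, x) for a, m in ensemble)
    return 1 if vote >= 0 else -1
```

The "occasional negative impact" noted in the abstract typically arises on noisy data, where the reweighting step concentrates ever more weight on mislabelled examples.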
© 2001 Springer-Verlag Berlin Heidelberg
Cite this chapter
Quinlan, R. (2001). Relational Learning and Boosting. In: Džeroski, S., Lavrač, N. (eds) Relational Data Mining. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-04599-2_12
DOI: https://doi.org/10.1007/978-3-662-04599-2_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-07604-6
Online ISBN: 978-3-662-04599-2
eBook Packages: Springer Book Archive