
Decision Tree Learner in the Presence of Domain Knowledge

  • João Vieira
  • Cláudia Antunes
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 480)

Abstract

In the era of the semantic web and big data, the need for machine learning algorithms able to exploit domain ontologies is undeniable. In the past, two divergent research lines were followed; now that background knowledge can be represented through domain ontologies, it is possible to develop new ontology-driven learning algorithms. In this paper, we propose a method that adds domain knowledge, represented in OWL 2, to a purely statistical decision tree learner. The new algorithm tries to find the best attributes to test in the decision tree, considering both existing attributes and new ones that can be inferred from the ontology. By exploring the set of axioms in the ontology, the algorithm is able to determine at run time the best level of abstraction for each attribute, infer new attributes, and decide which ones to use in the tree. Our experimental results show that our method produces smaller and more accurate trees even on data sets where all features are concrete, but especially on those where some features are only specified at higher levels of abstraction. We also show that our method performs substantially better than traditional decision tree classifiers when only a small number of labeled instances is available.
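The core idea described above can be illustrated with a minimal sketch: when selecting a split attribute, the learner also evaluates abstracted versions of each attribute, obtained by lifting its values to ancestors in a taxonomy, and keeps the variant with the best information gain. The toy taxonomy, attribute names, and helper functions below are invented for illustration only; they are not the paper's implementation, which works over OWL 2 axioms rather than a simple parent map.

```python
import math
from collections import Counter

# Toy taxonomy standing in for an ontology's class hierarchy:
# child value -> more abstract parent value.
TAXONOMY = {
    "poodle": "dog", "beagle": "dog",
    "siamese": "cat", "persian": "cat",
}

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * math.log2(c / total)
                for c in Counter(labels).values())

def info_gain(rows, labels, attr, level):
    """Information gain of `attr` after `level` abstraction steps."""
    def lift(v):
        for _ in range(level):
            v = TAXONOMY.get(v, v)  # stay put at the taxonomy root
        return v
    groups = {}
    for row, y in zip(rows, labels):
        groups.setdefault(lift(row[attr]), []).append(y)
    remainder = sum(len(g) / len(labels) * entropy(g)
                    for g in groups.values())
    return entropy(labels) - remainder

def best_attribute(rows, labels, attrs, max_level=2):
    """Return the (attribute, abstraction level) pair with highest gain."""
    return max(((a, lvl) for a in attrs for lvl in range(max_level + 1)),
               key=lambda p: info_gain(rows, labels, *p))
```

In a full learner this selection step would replace the standard attribute choice in an ID3/C4.5-style recursion, with ties broken toward the more abstract variant to favor smaller trees, as the paper's results suggest.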

Keywords

Semantic aspects of data mining · Classification · Decision trees · Background knowledge · Ontologies


Copyright information

© Springer-Verlag Berlin Heidelberg 2014

Authors and Affiliations

  1. Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal
