An Investigation into the Role of Domain-Knowledge on the Use of Embeddings

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10759)

Abstract

Computing similarity in high-dimensional vector spaces is a long-standing problem that has recently seen significant progress with the invention of the word2vec algorithm. Typically, an embedded representation has been found to yield substantially better performance on the task being addressed. It is not known whether embeddings can similarly improve performance with data of the kind considered by Inductive Logic Programming (ILP), in which data that appear dissimilar on the surface can be similar to each other given domain (background) knowledge. In this paper, using several ILP classification benchmarks, we investigate whether embedded representations are similarly helpful for problems where sufficient background knowledge is available. We use tasks for which domain expertise about the relevance of the background knowledge is available, and consider two subsets of background predicates (“sufficient” and “insufficient”). For each subset, we obtain a baseline representation consisting of Boolean-valued relational features. Next, a vector embedding specifically designed for classification is obtained. Finally, we examine the predictive performance of widely-used classification methods with and without the embedded representation. With sufficient background knowledge, we find no statistical evidence of improved performance with an embedded representation. With insufficient background knowledge, our results provide empirical evidence that, for the specific case of deep networks, an embedded representation could be useful.
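
The sketch below (in Python, not from the paper) illustrates the pipeline the abstract describes. Relational examples are first propositionalised into Boolean-valued features, and a standard classifier is then evaluated on those features and on a low-dimensional embedding of them. The toy data generator, the feature definitions, and the use of scikit-learn are illustrative assumptions; the hidden layer of a small supervised MLP merely stands in for the class-sensitive embedding developed in the paper.

    # Illustrative sketch only: toy data and models, not the authors' code.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)

    # Toy relational examples: each "molecule" is a set of ground facts.
    def random_molecule():
        atoms = ["c", "n", "o", "cl"]
        facts = {("atom", str(a)) for a in rng.choice(atoms, size=4)}
        if rng.random() < 0.5:
            facts.add(("ring", "benzene"))
        return facts

    examples = [random_molecule() for _ in range(200)]
    labels = np.array([1 if ("ring", "benzene") in m else 0 for m in examples])

    # Step 1: Boolean-valued relational features (a simple propositionalisation).
    features = [
        lambda m: ("ring", "benzene") in m,
        lambda m: ("atom", "cl") in m,
        lambda m: ("atom", "n") in m and ("atom", "o") in m,
    ]
    X = np.array([[f(m) for f in features] for m in examples], dtype=float)

    # Step 2a: baseline classifier on the Boolean features.
    baseline = cross_val_score(LogisticRegression(), X, labels, cv=5).mean()

    # Step 2b: the same classifier on an embedded representation, here the
    # ReLU hidden-layer activations of a small supervised MLP (a stand-in).
    mlp = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                        random_state=0).fit(X, labels)
    H = np.maximum(0.0, X @ mlp.coefs_[0] + mlp.intercepts_[0])
    embedded = cross_val_score(LogisticRegression(), H, labels, cv=5).mean()

    print(f"accuracy, Boolean features:  {baseline:.3f}")
    print(f"accuracy, embedded features: {embedded:.3f}")

The paper's experiments ask whether the second representation yields a statistically detectable improvement over the first on real ILP benchmarks, under sufficient and insufficient background knowledge.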

Notes

  1. ILP practitioners will recognise this as a “propositionalisation” step. In fact, as will become apparent, our approach for obtaining features is simpler than most propositionalisation methods.

  2. It is useful to clarify the choice of hypotheses. The null hypothesis \(H_0\) holds if there is no evidence for the utility of an embedded representation. This choice is motivated by the fact that obtaining a class-sensitive embedded representation requires substantial computational effort. Thus, the experiment is set up to be conservative: an embedded representation will only be obtained if there is statistical evidence to support it. (A simplified illustration of this decision rule follows these notes.)

  3. Information of this nature is already available in the ILP literature for Mutagenesis and Carcinogenesis from [24]. The information on relevance in that paper was provided by Professor R.D. King. We thank him for extending this to the other problems here.

  4. Comparing the results in Figs. 3 and 7 suggests that this will perform better than using an ILP approach, even with optimised parameters.
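
As a concrete reading of footnote 2, the following sketch (in Python, not from the paper) shows a conservative decision rule of the kind described there: the null hypothesis is retained, and no embedded representation is adopted, unless a paired sign test over per-dataset accuracy differences gives evidence against it. The accuracy differences shown are hypothetical, and ties are simply dropped here, whereas the paper cites a variant of the sign test that includes ties [14].

    # Illustration only: a plain two-sided sign test; the paper uses a
    # sign test with ties included [14].
    from math import comb

    def sign_test_p(diffs):
        """Two-sided sign-test p-value for paired differences (ties dropped)."""
        wins = sum(d > 0 for d in diffs)
        losses = sum(d < 0 for d in diffs)
        n = wins + losses
        k = min(wins, losses)
        # P(at most k successes) under Binomial(n, 0.5), doubled for two sides.
        p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
        return min(1.0, p)

    # Hypothetical per-dataset accuracy differences: embedded minus baseline.
    diffs = [0.02, -0.01, 0.00, 0.03, -0.02, 0.01, 0.01, -0.01]
    p = sign_test_p(diffs)
    wins = sum(d > 0 for d in diffs)
    losses = sum(d < 0 for d in diffs)
    use_embedding = p < 0.05 and wins > losses  # conservative: default to H0
    print(f"p-value = {p:.3f}; adopt embedded representation: {use_embedding}")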

References

  1. Aggarwal, C.C., Hinneburg, A., Keim, D.A.: On the surprising behavior of distance metrics in high dimensional space. In: Van den Bussche, J., Vianu, V. (eds.) ICDT 2001. LNCS, vol. 1973, pp. 420–434. Springer, Heidelberg (2001). https://doi.org/10.1007/3-540-44503-X_27

  2. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Burges, C.J.C., Bottou, L., Welling, M., Ghahramani, Z., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 26, pp. 2787–2795. Curran Associates Inc, Red Hook (2013)

  3. Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM (2016)

  4. Faruquie, T.A., Srinivasan, A., King, R.D.: Topic models with relational features for drug design. In: Riguzzi, F., Železný, F. (eds.) ILP 2012. LNCS (LNAI), vol. 7842, pp. 45–57. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38812-5_4

  5. França, M.V.M., Zaverucha, G., d’Avila Garcez, A.S.: Fast relational learning using bottom clause propositionalization with artificial neural networks. Mach. Learn. 94(1), 81–104 (2014)

  6. Goodfellow, I.J., Bengio, Y., Courville, A.C.: Deep Learning. Adaptive Computation and Machine Learning. MIT Press, Cambridge (2016)

  7. Joshi, S., Ramakrishnan, G., Srinivasan, A.: Feature construction using theory-guided sampling and randomised search. In: Železný, F., Lavrač, N. (eds.) ILP 2008. LNCS (LNAI), vol. 5194, pp. 140–157. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-85928-4_14

  8. King, R.D., Muggleton, S.H., Srinivasan, A., Sternberg, M.J.: Structure-activity relationships derived by machine learning: the use of atoms and their bond connectivities to predict mutagenicity by inductive logic programming. Proc. Natl. Acad. Sci. U.S.A. 93(1), 438–442 (1996)

  9. King, R.D., Srinivasan, A.: Prediction of rodent carcinogenicity bioassays from molecular structure using inductive logic programming. Environ. Health Perspect. 104, 1031–1040 (1996)

  10. Koch, G.: Siamese neural networks for one-shot image recognition (2015)

  11. Lavrač, N., Džeroski, S., Grobelnik, M.: Learning nonrecursive definitions of relations with linus. In: Kodratoff, Y. (ed.) EWSL 1991. LNCS, vol. 482, pp. 265–281. Springer, Heidelberg (1991). https://doi.org/10.1007/BFb0017020

  12. Lin, Y., Liu, Z., Sun, M., Liu, Y., Zhu, X.: Learning entity and relation embeddings for knowledge graph completion. In: AAAI (2015)

  13. Lodhi, H.: Deep relational machines. In: Lee, M., Hirose, A., Hou, Z.-G., Kil, R.M. (eds.) ICONIP 2013. LNCS, vol. 8227, pp. 212–219. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-42042-9_27

  14. Marshall, J.B.: The sign test with ties included. Appl. Math. 5, 1594–1597 (2014)

  15. Mikolov, T., Sutskever, I., Chen, K., Corrado, G.S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems 26: 27th Annual Conference on Neural Information Processing Systems 2013. Proceedings of a Meeting Held 5–8 December 2013, Lake Tahoe, Nevada, United States, pp. 3111–3119 (2013)

  16. Muggleton, S.H., Santos, J.C.A., Tamaddoni-Nezhad, A.: TopLog: ILP using a logic program declarative bias. In: Garcia de la Banda, M., Pontelli, E. (eds.) ICLP 2008. LNCS, vol. 5366, pp. 687–692. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-89982-2_58

  17. Muggleton, S.: Inverse entailment and Progol. New Gener. Comput. 13(3&4), 245–286 (1995)

  18. Ramakrishnan, G., Joshi, S., Balakrishnan, S., Srinivasan, A.: Using ILP to construct features for information extraction from semi-structured text. In: Blockeel, H., Ramon, J., Shavlik, J., Tadepalli, P. (eds.) ILP 2007. LNCS (LNAI), vol. 4894, pp. 211–224. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78469-2_22

  19. Saha, A., Srinivasan, A., Ramakrishnan, G.: What kinds of relational features are useful for statistical learning? In: Riguzzi, F., Železný, F. (eds.) ILP 2012. LNCS (LNAI), vol. 7842, pp. 209–224. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-38812-5_15

  20. Specia, L., Srinivasan, A., Joshi, S., Ramakrishnan, G., das Graças Volpe Nunes, M.: An investigation into feature construction to assist word sense disambiguation. Mach. Learn. 76(1), 109–136 (2009)

  21. Srinivasan, A.: The Aleph Manual (1999). http://www.comlab.ox.ac.uk/oucl/research/areas/machlearn/Aleph/

  22. Srinivasan, A., Muggleton, S.H., Sternberg, M.J.E., King, R.D.: Theories for mutagenicity: a study in first-order and feature-based induction. Artif. Intell. 85(1–2), 277–299 (1996)

  23. Srinivasan, A., King, R.D.: Feature construction with inductive logic programming: a study of quantitative predictions of biological activity by structural attributes. In: Muggleton, S. (ed.) ILP 1996. LNCS, vol. 1314, pp. 89–104. Springer, Heidelberg (1997). https://doi.org/10.1007/3-540-63494-0_50

  24. Srinivasan, A., King, R.D., Bain, M.: An empirical study of the use of relevance information in inductive logic programming. J. Mach. Learn. Res. 4, 369–383 (2003)

  25. Srinivasan, A., Ramakrishnan, G.: Parameter screening and optimisation for ILP using designed experiments. J. Mach. Learn. Res. 12, 627–662 (2011)

  26. Wang, Z., Zhang, J., Feng, J., Chen, Z.: Knowledge graph embedding by translating on hyperplanes. In: AAAI (2014)

Acknowledgements

A.S. is a Visiting Professor in the Department of Computer Science, University of Oxford; and Visiting Professorial Fellow, School of CSE, UNSW Sydney. A.S. is supported by the SERB grant EMR/2016/002766.

Author information

Corresponding author

Correspondence to Lovekesh Vig.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Vig, L., Srinivasan, A., Bain, M., Verma, A. (2018). An Investigation into the Role of Domain-Knowledge on the Use of Embeddings. In: Lachiche, N., Vrain, C. (eds.) Inductive Logic Programming. ILP 2017. Lecture Notes in Computer Science (LNAI), vol. 10759. Springer, Cham. https://doi.org/10.1007/978-3-319-78090-0_12

  • DOI: https://doi.org/10.1007/978-3-319-78090-0_12

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-78089-4

  • Online ISBN: 978-3-319-78090-0

  • eBook Packages: Computer Science, Computer Science (R0)
