
Context-Guided Self-supervised Relation Embeddings

  • Huda Hakami
  • Danushka Bollegala
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 1215)

Abstract

A semantic relation between two given words a and b can be represented using two complementary sources of information: (a) the semantic representations of a and b (expressed as word embeddings) and (b) the contextual information obtained from the co-occurrence contexts of the two words (expressed in the form of lexico-syntactic patterns). The pattern-based approach suffers from data sparsity, while methods that rely only on the word embeddings of the related words lack relational information. Prior work on relation embeddings has predominantly focused on one of these two resources exclusively, with a few notable exceptions. In this paper, we propose a self-supervised Context-Guided Relation Embedding method (CGRE) that uses both sources of information. We evaluate the learnt method by creating relation representations for word pairs that do not co-occur. Experimental results on the SemEval-2012 Task 2 dataset show that the proposed operator outperforms other methods in representing relations for unobserved word pairs.
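The compositional side of the abstract — deriving a relation representation for a word pair directly from the two word embeddings — can be illustrated with the well-known PairDiff operator (the vector difference used in word-analogy work). The sketch below is not the authors' CGRE method; it uses random toy vectors in place of pre-trained GloVe/word2vec embeddings, purely to show the shape of a compositional relation embedding:

```python
import numpy as np

# Toy stand-ins for pre-trained word embeddings (e.g. GloVe/word2vec);
# the 50-dimensional random vectors are illustrative only.
rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["king", "queen", "man", "woman"]}

def pairdiff(a: str, b: str) -> np.ndarray:
    """PairDiff operator: represent the relation between words a and b
    as the difference of their word embeddings."""
    return vocab[b] - vocab[a]

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two relation vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# With real embeddings, relationally similar pairs (e.g. the
# "royalty" relation below) should yield similar relation vectors.
r1 = pairdiff("man", "king")
r2 = pairdiff("woman", "queen")
print(cosine(r1, r2))
```

With genuine pre-trained embeddings, such relation vectors score well on analogy-style benchmarks, but as the abstract notes, they carry no contextual (pattern-based) information about how the pair actually co-occurs in text.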

Keywords

Relation embeddings · Relational patterns · Compositional approach · Learning relation representations


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Computer Science, University of Liverpool, Liverpool, UK
  2. Department of Computer Science, Taif University, Taif, Saudi Arabia
