Abstract
Large organisations and corporations routinely produce reports of many kinds. Beyond archiving them and the occasional retrieval, little is typically done to exploit these massive resources for knowledge discovery; such under-utilised unstructured natural-language text is often described as part of the “dark data”. Encouragingly, the recent success of learning distributed representations of words in vector spaces, in particular the similarity and analogy queries that the learned word vectors enable, is driving a paradigm shift from “document retrieval” to “knowledge retrieval”. In this paper, we investigate how representation learning of words affects entity query results over a large domain corpus of geological survey reports. Extensive similarity tests and analogy queries demonstrate the necessity of training domain-specific word embeddings: pre-trained embeddings capture morphological relations well but are inadequate for domain-specific semantic relations. Performing entity extraction before word embedding training further improves the quality of analogy query results. The framework developed in this paper can be readily applied to other domain-specific corpora.
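To make the workflow the abstract describes concrete, the following is a minimal sketch of training domain-specific embeddings on a report corpus and then issuing similarity and analogy queries. It is illustrative only, not the authors' exact pipeline: the corpus filename, the mineral terms, and the use of gensim's Word2Vec with statistical phrase detection (a crude stand-in for the paper's entity-extraction step) are all assumptions.

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases
from gensim.utils import simple_preprocess

# Tokenise the corpus; one report (or sentence) per line is assumed.
with open("geological_reports.txt", encoding="utf-8") as f:
    sentences = [simple_preprocess(line) for line in f]

# Stand-in for entity extraction: merge frequent collocations
# (e.g. "banded iron" -> "banded_iron") so multi-word terms get one vector.
bigrams = Phrases(sentences, min_count=5, threshold=10.0)
sentences = [bigrams[s] for s in sentences]

# Train a domain-specific skip-gram word2vec model.
model = Word2Vec(
    sentences,
    vector_size=200,  # word-vector dimensionality
    window=5,         # context window size
    min_count=5,      # ignore rare tokens
    sg=1,             # 1 = skip-gram (0 = CBOW)
    workers=4,
)

# Similarity query: cosine similarity between two domain terms.
print(model.wv.similarity("gold", "pyrite"))

# Analogy query via vector offset: hematite - iron + gold ~ ?
print(model.wv.most_similar(positive=["hematite", "gold"],
                            negative=["iron"], topn=5))
```

With embeddings trained on general text instead of the domain corpus, the same analogy query tends to return morphologically related words rather than geologically meaningful ones, which is the contrast the paper examines.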
Copyright information
© 2018 Springer Nature Switzerland AG
About this paper
Cite this paper
Enkhsaikhan, M., Liu, W., Holden, E.J., Duuring, P. (2018). Towards Geological Knowledge Discovery Using Vector-Based Semantic Similarity. In: Gan, G., Li, B., Li, X., Wang, S. (eds) Advanced Data Mining and Applications. ADMA 2018. Lecture Notes in Computer Science, vol 11323. Springer, Cham. https://doi.org/10.1007/978-3-030-05090-0_20
DOI: https://doi.org/10.1007/978-3-030-05090-0_20
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-05089-4
Online ISBN: 978-3-030-05090-0
eBook Packages: Computer Science, Computer Science (R0)