Abstract
Knowledge graph (KG) completion aims to fill in the missing facts of a KG, where a fact is represented as a triple of the form (subject, relation, object). Current KG completion models require two-thirds of a triple to be provided (e.g., the subject and relation) in order to predict the remaining element. In this paper, we propose a new model that uses a KG-specific multi-layer recurrent neural network (RNN) to model the triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task across many evaluation metrics, evaluated on two benchmark datasets and a more difficult dataset. Furthermore, the sequential nature of our model enables it to predict whole triples given only one entity. Our experiments demonstrated that our model achieves promising performance on this new triple prediction task.
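To make the core idea concrete, the following is a minimal sketch (not the authors' DSKG implementation) of treating a triple as a sequence: a recurrent state consumes the subject embedding, then the relation embedding, and the resulting state is used to score every entity as a candidate object. All sizes, initializations, and the single-layer cell here are illustrative assumptions; the paper's model is a multi-layer, KG-specific RNN.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary sizes (hypothetical; the paper uses benchmark KGs)
n_entities, n_relations, dim = 5, 3, 8

# Embedding tables for entities and relations
E = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

# Parameters of a simple single-layer RNN cell (illustrative only)
W_h = rng.normal(size=(dim, dim)) * 0.1
W_x = rng.normal(size=(dim, dim)) * 0.1

def rnn_step(h, x):
    """One recurrent step: combine the previous state with the current input."""
    return np.tanh(h @ W_h + x @ W_x)

def rank_objects(subj_id, rel_id):
    """Feed the triple prefix (subject, relation) as a two-step sequence,
    then score every entity as the object and return entity ids ranked
    from highest to lowest score."""
    h = np.zeros(dim)
    h = rnn_step(h, E[subj_id])   # step 1: read the subject
    h = rnn_step(h, R[rel_id])    # step 2: read the relation
    scores = E @ h                # dot-product score for each candidate object
    return np.argsort(-scores)

ranking = rank_objects(0, 1)
```

Because prediction is just "continue the sequence", the same machinery can in principle start from a single entity and generate the relation and object step by step, which is the triple prediction task the abstract describes.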
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Nos. 61872172 and 61772264).
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Guo, L., Zhang, Q., Ge, W., Hu, W., Qu, Y. (2019). DSKG: A Deep Sequential Model for Knowledge Graph Completion. In: Zhao, J., Harmelen, F., Tang, J., Han, X., Wang, Q., Li, X. (eds) Knowledge Graph and Semantic Computing. Knowledge Computing and Language Understanding. CCKS 2018. Communications in Computer and Information Science, vol 957. Springer, Singapore. https://doi.org/10.1007/978-981-13-3146-6_6
Print ISBN: 978-981-13-3145-9
Online ISBN: 978-981-13-3146-6