DSKG: A Deep Sequential Model for Knowledge Graph Completion

  • Lingbing Guo
  • Qingheng Zhang
  • Weiyi Ge
  • Wei Hu
  • Yuzhong Qu
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 957)


Knowledge graph (KG) completion aims to fill in the missing facts of a KG, where a fact is represented as a triple of the form (subject, relation, object). Current KG completion models require two-thirds of a triple to be provided (e.g., the subject and relation) in order to predict the remaining element. In this paper, we propose a new model that uses a KG-specific multi-layer recurrent neural network (RNN) to model triples in a KG as sequences. It outperformed several state-of-the-art KG completion models on the conventional entity prediction task under many evaluation metrics, based on two benchmark datasets and a more difficult dataset. Furthermore, owing to its sequential nature, our model is capable of predicting a whole triple given only one entity. Our experiments demonstrated that it achieved promising performance on this new triple prediction task.
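The core idea above, treating a triple (subject, relation, object) as a short sequence and training a model to predict each next element, can be illustrated with a small data-preparation sketch. The function below is an assumption for illustration, not the authors' code: it indexes entities and relations in one shared vocabulary (so a single RNN can consume both) and emits (prefix, next-element) training pairs.

```python
def triples_to_sequences(triples):
    """Build next-element prediction pairs from KG triples.

    Each triple (s, r, o) yields two training examples:
      [s]    -> r   (predict the relation from the subject)
      [s, r] -> o   (predict the object from subject and relation)
    Entities and relations share one vocabulary, so the sequence
    model sees them as tokens of a single input stream.
    """
    vocab = {}

    def idx(token):
        # Assign the next free integer id on first sight of a token.
        return vocab.setdefault(token, len(vocab))

    pairs = []
    for s, r, o in triples:
        s_i, r_i, o_i = idx(s), idx(r), idx(o)
        pairs.append(([s_i], r_i))
        pairs.append(([s_i, r_i], o_i))
    return pairs, vocab


triples = [("Paris", "capitalOf", "France"),
           ("Berlin", "capitalOf", "Germany")]
pairs, vocab = triples_to_sequences(triples)
```

Under this framing, the conventional entity prediction task corresponds to the `[s, r] -> o` pairs, while the triple prediction task described above starts from `[s]` alone and decodes the rest of the sequence step by step.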


Knowledge graph completion · Deep sequential model · Recurrent neural network



This work was supported by the National Natural Science Foundation of China (Nos. 61872172 and 61772264).



Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Lingbing Guo (1, 2)
  • Qingheng Zhang (1)
  • Weiyi Ge (2)
  • Wei Hu (1, corresponding author)
  • Yuzhong Qu (1)

  1. State Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China
  2. Science and Technology on Information Systems Engineering Lab, Nanjing, China
