Deductive and Analogical Reasoning on a Semantically Embedded Knowledge Graph

  • Conference paper
  • Conference: Artificial General Intelligence (AGI 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10414)

Abstract

Representing knowledge as high-dimensional vectors in a continuous semantic vector space can help overcome the brittleness and incompleteness of traditional knowledge bases. We present a method for performing deductive reasoning directly in such a vector space, combining analogy, association, and deduction in a straightforward way at each step in a chain of reasoning, drawing on knowledge from diverse sources and ontologies.

Notes

  1. In some special cases, the error in one gap of the chain will largely cancel out with the error at another gap. When this happens, the system has found an analogous relation. This is discussed in the section “Analogical Properties of Semantic Spaces” below.

  2. Boole and De Morgan originally formulated propositional logic as a special case of the logic of subsets [5].

  3. If \(\varvec{a}\) and \(\varvec{b}\) are approximately orthogonal unit vectors, then the similarity between their sum \(\varvec{a}+\varvec{b}\) and either of them will be \(\frac{\sqrt{2}}{2}\). This is much higher than the expected similarity between two terms selected at random from the space. See [20] for details; a short derivation is sketched after these notes.

  4. Notice that addition is used as AND rather than OR when combining B with A and \(A\Rightarrow B\) (see the caption of Table 1 for why this is acceptable). At any rate, the notion of cancelling out with modus ponens still holds.

  5. When a direct chain of reasoning is possible, such links won't happen: the analogy, being inexact, has a higher cost than the direct link.

  6. Along the same lines, [22] describes a more intricate method of locating particular word senses in the vector space.

  7. Deductive reasoning systems typically use either forward or backward inference. This system uses “middle out” inference, which does not begin at either end but proceeds holistically along the whole chain at once.

  8. Notice that the fourth, less relevant, fact also relates a food to a color.

  9. In fact, they may form a multistranded rope rather than a chain: the “elastic-net” [23] parameter in LASSO can be used to encourage or discourage finding alternative, equally good paths for part or all of the chain. A minimal code sketch of this trade-off appears after these notes.

  10. A slightly more complicated cost function can be used to encourage the lowest cost path to follow analogical connections as well.
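
Derivation for note 3 (a sketch, assuming exactly unit-length vectors with \(\varvec{a}\cdot\varvec{b}\approx 0\)): the stated value follows from a short cosine-similarity computation,

\[
\cos(\varvec{a}+\varvec{b},\,\varvec{a})
  = \frac{(\varvec{a}+\varvec{b})\cdot\varvec{a}}{\lVert\varvec{a}+\varvec{b}\rVert\,\lVert\varvec{a}\rVert}
  = \frac{1+\varvec{a}\cdot\varvec{b}}{\sqrt{2+2\,\varvec{a}\cdot\varvec{b}}}
  \approx \frac{1}{\sqrt{2}}
  = \frac{\sqrt{2}}{2}.
\]

By contrast, two random high-dimensional unit vectors have an expected cosine similarity near zero, which is why a superposition stays markedly closer to its constituents than to unrelated terms.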
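
Sketch for note 9: the short example below is not code from the paper; it only illustrates, on a randomly generated dictionary, how scikit-learn's ElasticNet mixing parameter l1_ratio trades a single sparse set of links against spreading weight over near-equivalent ones. The names fact_matrix and target are illustrative stand-ins.

    import numpy as np
    from sklearn.linear_model import ElasticNet

    rng = np.random.default_rng(1)
    d, n_facts = 100, 50

    # Hypothetical dictionary: each column stands in for the vector offset of one fact/edge.
    fact_matrix = rng.standard_normal((d, n_facts))

    # Hypothetical target: the offset from a premise vector to a conclusion vector,
    # built from columns 3 and 7 so that a two-link "chain" exists in the dictionary.
    target = fact_matrix[:, 3] + fact_matrix[:, 7]

    # l1_ratio = 1.0 is pure LASSO and prefers a single sparse set of links;
    # lowering it adds an L2 term that lets weight spread over near-equivalent links,
    # the "multistranded rope" behaviour mentioned in note 9.
    for l1_ratio in (1.0, 0.5):
        model = ElasticNet(alpha=0.01, l1_ratio=l1_ratio, fit_intercept=False, max_iter=10000)
        model.fit(fact_matrix, target)
        support = np.flatnonzero(np.abs(model.coef_) > 1e-3)
        print(l1_ratio, support)

Both runs should recover columns 3 and 7 here; on a dictionary containing genuinely redundant facts, the lower l1_ratio distributes weight across them rather than picking one arbitrarily.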

References

  1. Baroni, M., Zamparelli, R.: Nouns are vectors, adjectives are matrices: representing adjective-noun constructions in semantic space. In: 2010 Conference on Empirical Methods in Natural Language Processing, pp. 1183–1193. ACL, October 2010

  2. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J., Yakhnenko, O.: Translating embeddings for modeling multi-relational data. In: Advances in Neural Information Processing Systems, pp. 2787–2795 (2013)

  3. Buchanan, B.G., Shortliffe, E.H.: Rule-Based Expert Systems: The MYCIN Experiments. Addison-Wesley, Reading (1984)

  4. Dumais, S.T., Furnas, G.W., Landauer, T.K., Deerwester, S., Harshman, R.: Using latent semantic analysis to improve access to textual information. In: SIGCHI Conference, pp. 281–285. ACM, May 1988

  5. Ellerman, D.: The logic of partitions: introduction to the dual of the logic of subsets. Rev. Symbolic Logic 3(2), 287–350 (2010)

  6. Freitas, A., Curry, E.: Natural language queries over heterogeneous linked data graphs: a distributional-compositional semantics approach. In: 19th International Conference on Intelligent User Interfaces. ACM (2014)

  7. Gayler, R.: Vector symbolic architectures answer Jackendoff’s challenges for cognitive neuroscience. In: Slezak, P. (ed.) ICCS/ASCS International Conference on Cognitive Science, pp. 133–138. University of New South Wales, CogPrints, Sydney (2003)

  8. Grefenstette, E., Sadrzadeh, M.: Experimental support for a categorical compositional distributional model of meaning. In: Conference on Empirical Methods in Natural Language Processing, pp. 1394–1404. ACL, July 2011

  9. Kanerva, P.: Sparse Distributed Memory. MIT Press, Cambridge (1988)

  10. Kiros, R., Zhu, Y., Salakhutdinov, R.R., Zemel, R., Urtasun, R., Torralba, A., Fidler, S.: Skip-thought vectors. In: NIPS, pp. 3294–3302 (2015)

  11. Knowlton, B., Morrison, R., Hummel, J., Holyoak, K.: A neurocomputational system for relational reasoning. Trends Cogn. Sci. 16(7), 373–381 (2012)

  12. Lee, M., He, X., Yih, W.T., Gao, J., Deng, L., Smolensky, P.: Reasoning in vector space: an exploratory study of question answering. arXiv:1511.06426 (2015)

  13. Levy, S.D.: Distributed representation of compositional structure. In: Rabuñal, J.R., Dorado, J., Pazos, A. (eds.) Encyclopedia of Artificial Intelligence. IGI Publishing, Hershey (2008)

  14. Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., Dean, J.: Distributed representations of words and phrases and their compositionality. In: Advances in Neural Information Processing Systems, pp. 3111–3119 (2013)

  15. Rocktäschel, T., Riedel, S.: Learning knowledge base inference with neural theorem provers. In: AKBC, pp. 45–50 (2016)

  16. Summers-Stay, D., Voss, C., Cassidy, T.: Using a distributional semantic vector space with a knowledge base for reasoning in uncertain conditions. Biologically Inspired Cogn. Architectures 16, 34–44 (2016)

  17. Turney, P.D.: Measuring semantic similarity by latent relational analysis. arXiv preprint cs/0508053 (2005)

  18. Wang, H., Onishi, T., Gimpel, K., McAllester, D.: Emergent logical structure in vector representations of neural readers. arXiv preprint arXiv:1611.07954 (2016)

  19. West, R., Gabrilovich, E., Murphy, K., Sun, S., Gupta, R., Lin, D.: Knowledge base completion via search-based question answering. In: 23rd International Conference on World Wide Web, pp. 515–526. ACM, April 2014

  20. Widdows, D., Peters, S.: Word vectors and quantum logic: experiments with negation and disjunction. Math. Lang. 8, 141–154 (2003)

  21. Widdows, D., Cohen, T.: Reasoning with vectors: a continuous model for fast robust inference. Logic J. IGPL 23(2), 141–173 (2014). jzu028

  22. Yu, M., Dredze, M.: Improving lexical embeddings with semantic knowledge. In: ACL, no. 2, pp. 545–550, June 2014

  23. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 67(2), 301–320 (2005)

Author information

Correspondence to Douglas Summers-Stay.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Summers-Stay, D. (2017). Deductive and Analogical Reasoning on a Semantically Embedded Knowledge Graph. In: Everitt, T., Goertzel, B., Potapov, A. (eds.) Artificial General Intelligence. AGI 2017. Lecture Notes in Computer Science, vol 10414. Springer, Cham. https://doi.org/10.1007/978-3-319-63703-7_11

  • DOI: https://doi.org/10.1007/978-3-319-63703-7_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-63702-0

  • Online ISBN: 978-3-319-63703-7

  • eBook Packages: Computer Science, Computer Science (R0)
