Semantic Skeleton Thesauri for Question Answering Bots

  • Boris Galitsky


We build a question–answering (Q/A) chatbot component for answering complex questions in poorly formalized and logically complex domains. Answers are annotated with deductively linked logical expressions (semantic skeletons), which are matched against formal representations of questions. We take a logic programming approach, so the search for an answer amounts to determining the clauses (associated with that answer) from which the formal representation of the question can be deduced. This Q/A technique has been implemented for the financial and legal domains, which are rather sophisticated on one hand and require fairly precise answers on the other.
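The matching idea described above can be illustrated with a small backward-chaining sketch. This is not the author's implementation; the predicate names, answer identifiers, and clause sets below are illustrative assumptions. Each candidate answer carries a semantic skeleton encoded as Horn clauses, the question is formalized as a goal term, and an answer matches when its clauses entail the goal. (For brevity the sketch does not standardize clause variables apart on each use, as a full prover would.)

```python
# Variables are strings beginning with '?'; terms are nested tuples of strings.

def is_var(t):
    return isinstance(t, str) and t.startswith('?')

def bind(v, val, s):
    # Extend substitution s with v -> val, checking consistency.
    return unify(s[v], val, s) if v in s else {**s, v: val}

def unify(a, b, s):
    # Standard syntactic unification under substitution s; None on failure.
    if s is None or a == b:
        return s
    if is_var(a):
        return bind(a, b, s)
    if is_var(b):
        return bind(b, a, s)
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def subst(t, s):
    # Apply substitution s to term t.
    if is_var(t):
        return subst(s[t], s) if t in s else t
    if isinstance(t, tuple):
        return tuple(subst(x, s) for x in t)
    return t

def prove(goal, clauses, s, depth=8):
    # Yield every substitution under which `goal` follows from `clauses`.
    if depth == 0:
        return
    for head, body in clauses:
        s2 = unify(subst(goal, s), head, dict(s))
        if s2 is not None:
            yield from prove_all(body, clauses, s2, depth - 1)

def prove_all(goals, clauses, s, depth):
    if not goals:
        yield s
        return
    for s2 in prove(goals[0], clauses, s, depth):
        yield from prove_all(goals[1:], clauses, s2, depth)

# Each answer id maps to its semantic skeleton: a list of (head, body) clauses.
# These example clauses from a toy tax domain are hypothetical.
answers = {
    "answer_tax_deduction": [
        (("deductible", "?x"), [("expense", "?x"), ("business_related", "?x")]),
        (("expense", "travel"), []),
        (("business_related", "travel"), []),
    ],
    "answer_filing_deadline": [
        (("deadline", "tax_return", "april_15"), []),
    ],
}

def find_answers(question_repr):
    # Return ids of answers whose skeleton entails the question's formal repr.
    return [aid for aid, clauses in answers.items()
            if next(prove(question_repr, clauses, {}), None) is not None]
```

Under these assumptions, `find_answers(("deductible", "travel"))` selects only the first answer, because its clauses let the goal be deduced, while the second answer's skeleton shares no derivable head with the question.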



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Boris Galitsky
  1. Oracle (United States), San Jose, USA
