Understanding Questions and Extracting Answers: Interactive Quiz Game Application Design

Conference paper

Human Language Technology. Challenges for Computer Science and Linguistics (LTC 2015)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10930)

Abstract

The paper discusses two key tasks performed by a Question Answering Dialogue System (QADS): user question interpretation and answer extraction. The system is realized as an interactive quiz game application whose content concerns biographical facts from the lives of famous people. Question classification and answer extraction are performed on the basis of a domain-specific taxonomy of semantic roles and relations used to compute the Expected Answer Type (EAT). Question interpretation is achieved by performing a sequence of classification, information extraction, query formalization and query expansion tasks; the expanded query facilitates the search and retrieval of information. Facts are extracted from Wikipedia pages by means of the same set of semantic relations, whose fillers are identified by trained sequence classifiers and pattern-matching tools, and are edited before being returned to the player as full-fledged system answers. The results (a precision of 85% for EAT classification in both questions and answers) show that the presented approach fits the data well and can be considered a promising method for other QA domains, in particular when dealing with unstructured information.
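The abstract describes a pipeline in which a question is first assigned an Expected Answer Type and the predicted EAT then constrains which relation fillers extracted from Wikipedia count as admissible answers. As a purely illustrative companion, the following is a minimal sketch of the EAT classification step only, assuming a small set of labelled quiz questions and using scikit-learn [26] with a linear SVM (cf. [4]) for convenience; the label set, training data and features are hypothetical and do not reproduce the authors' taxonomy of semantic roles and relations or their feature design.

    # Hypothetical sketch: Expected Answer Type (EAT) classification for quiz questions.
    # Labels, data and features are illustrative; the paper's actual taxonomy and
    # feature set are richer than shown here.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy training data: (question, EAT label) pairs over biographical facts.
    train_questions = [
        ("Where was Albert Einstein born?", "LOCATION"),
        ("When did Marie Curie die?",       "TIME"),
        ("Who was Frida Kahlo married to?", "PERSON"),
        ("What did Alan Turing study?",     "FIELD_OF_STUDY"),
    ]
    texts, labels = zip(*train_questions)

    # Word n-grams as a cheap stand-in for the paper's lexical, syntactic and
    # semantic features.
    eat_classifier = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
        LinearSVC(),
    )
    eat_classifier.fit(texts, labels)

    # At query time, the predicted EAT restricts which semantic-relation fillers
    # (extracted from Wikipedia) are acceptable as answers.
    print(eat_classifier.predict(["Where did Nikola Tesla work?"]))

In the full system described by the abstract, the formalized query would additionally be expanded (for example with synonyms) before retrieval, and the extracted fillers would be edited into full-fledged answers for the player.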

Notes

  1. http://www.apple.com/ios/siri/.

  2. https://developer.amazon.com/alexa.

  3. www.wolframalpha.com.

  4. http://www.freebase.com/.

  5. http://www.wikipedia.org.

  6. http://trec.nist.gov/pubs/trec8.

  7. https://www.youtube.com/channel/UChPE75Fvvl1HmdAsO7Nzb8w.

  8. http://cogcomp.cs.illinois.edu/page/resources/data.

  9. http://wordnet.princeton.edu/.

  10. Another classification procedure is hierarchical classification: a first classifier decides which coarse class a question belongs to and passes that decision to a second classifier trained specifically to predict the fine-grained types within that coarse class (see the sketch after these notes).

  11. To make the game more entertaining, the system can use strategies that turn a negative situation in its favour. For example, if no answer is found, the system may ask the player to pose another question, claiming that the previous one was not eligible for some reason or that answering it would end the game too quickly.

  12. http://nlp.stanford.edu/downloads/corenlp.shtml.

  13. http://www.cis.upenn.edu/~treebank/.

  14. http://wordnet.princeton.edu.

  15. We used two CRF implementations, CRF++ (http://crfpp.googlecode.com/svn/trunk/doc/index.html) and CRFsuite [7], with Averaged Perceptron (AP) and Limited-memory BFGS (L-BFGS) training methods.

  16. http://opennlp.apache.org/.

  17. Participants in the WoZ experiments indicated that not providing an answer was entertaining; giving wrong information, by contrast, was experienced as annoying.

  18. The distant supervision method is used when no labeled data is available, see [21].
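As referenced in note 10, the following is a minimal, hypothetical sketch of hierarchical question classification: a coarse classifier routes each question to a class-specific classifier that predicts the fine-grained type. Class labels, toy data and model choices are illustrative (scikit-learn [26] is used for convenience) and are not the authors' actual configuration.

    # Hypothetical sketch of hierarchical question classification (cf. note 10):
    # classifier #1 picks the coarse class, then a class-specific classifier
    # predicts the fine-grained type within that coarse class.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy data: question -> (coarse class, fine class). Labels are illustrative.
    data = [
        ("Where was Albert Einstein born?", "LOCATION", "BIRTHPLACE"),
        ("Where did Marie Curie work?",     "LOCATION", "WORKPLACE"),
        ("When was Frida Kahlo born?",      "TIME",     "BIRTHDATE"),
        ("When did Alan Turing die?",       "TIME",     "DEATHDATE"),
    ]
    questions = [q for q, _, _ in data]
    coarse    = [c for _, c, _ in data]

    def new_clf():
        return make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())

    # Classifier #1: coarse class over all questions.
    coarse_clf = new_clf().fit(questions, coarse)

    # One fine-grained classifier per coarse class, trained only on that class's questions.
    fine_clfs = {}
    for cls in set(coarse):
        qs   = [q for q, c, _ in data if c == cls]
        fine = [f for _, c, f in data if c == cls]
        fine_clfs[cls] = new_clf().fit(qs, fine)

    def classify(question):
        cls = coarse_clf.predict([question])[0]             # step 1: coarse decision
        return cls, fine_clfs[cls].predict([question])[0]   # step 2: fine decision

    print(classify("Where did Nikola Tesla work?"))

The practical appeal of the hierarchy is that each fine-grained classifier only has to discriminate among the subtypes of a single coarse class, which typically simplifies both its training data requirements and its feature design.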

References

  1. Min, B., Li, X., Grishman, R., Ang, S.: New York University 2012 system for KBP slot filling. In: Proceedings of the 5th Text Analysis Conference (TAC 2012) (2012)

  2. Roth, B., Chrupala, G., Wiegand, M., Singh, M., Klakow, D.: Saarland University spoken language systems at the slot filling task of TAC KBP 2012. In: Proceedings of the 5th Text Analysis Conference (TAC 2012), Gaithersburg, Maryland, USA (2012)

  3. Ferrucci, D., Brown, E., Chu-Carroll, J., Fan, J., Gondek, D., Kalyanpur, A., Lally, A., Murdock, J., Nyberg, E., Prager, J., Schlaefer, N., Welty, C.: Building Watson: an overview of the DeepQA Project. AI Mag. 31(3), 59–79 (2010)

  4. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)

  5. Moldovan, D., Harabagiu, S., Pasca, M., Mihalcea, R., Girju, R., Goodrum, R., Rus, V.: The structure and performance of an open-domain question answering system. In: Proceedings of the Association for Computational Linguistics, pp. 563–570 (2000)

  6. Lafferty, J., McCallum, A., Pereira, F.: Conditional random fields: probabilistic models for segmenting and labeling sequence data. In: Proceedings of ICML 2001, pp. 282–289 (2001)

  7. Okazaki, N.: CRFsuite: a fast implementation of Conditional Random Fields (CRFs) (2007). http://www.chokkan.org/software/crfsuite/

  8. Ellis, J.: TAC KBP 2013 slot descriptions (2013). http://surdeanu.info/kbp2013/TAC_2013_KBP_Slot_Descriptions_1.0.pdf

  9. Joachims, T., Finley, T., Yu, C.: Cutting-plane training of structural SVMs. Mach. Learn. 77(1), 27–59 (2009)

  10. Li, X., Roth, D.: Learning question classifiers. In: Proceedings of COLING 2002, pp. 1–7. Association for Computational Linguistics (2002)

  11. Ratinov, L., Roth, D.: Design challenges and misconceptions in named entity recognition. In: Proceedings of CoNLL 2009, pp. 147–155. Association for Computational Linguistics (2009)

  12. Tjong Kim Sang, E., Buchholz, S.: Introduction to the CoNLL-2000 shared task: chunking. In: Proceedings of the 2nd Workshop on Learning Language in Logic and CoNLL-2000, pp. 127–132. Association for Computational Linguistics (2000)

  13. Finkel, J., Grenager, T., Manning, C.: Incorporating non-local information into information extraction systems by Gibbs sampling. In: Proceedings of ACL 2005, pp. 363–370. Association for Computational Linguistics (2005)

  14. Toutanova, K., Klein, D., Manning, C., Singer, Y.: Feature-rich part-of-speech tagging with a cyclic dependency network. In: Proceedings of NAACL 2003, pp. 173–180. Association for Computational Linguistics (2003)

  15. Chrupala, G., Klakow, D.: A named entity labeler for German: exploiting wikipedia and distributional clusters. In: Proceedings of LREC 2010, pp. 552–556. European Language Resources Association (ELRA) (2010)

  16. Jackendoff, R.S.: Semantic Structures. MIT Press, Cambridge (1990)

  17. ISO: Language resource management - Semantic annotation framework - Part 2: Dialogue acts. ISO 24617–2. ISO Central Secretariat, Geneva (2012)

  18. Surdeanu, M.: Overview of the TAC2013 knowledge base population evaluation: English slot filling and temporal slot filling. In: Proceedings of the TAC KBP 2013 Workshop. National Institute of Standards and Technology (2013)

  19. Cohen, J.: A coefficient of agreement for nominal scales. Educ. Psychol. Measur. 20, 37–46 (1960)

  20. Roth, B., Barth, T., Wiegand, M., Singh, M., Klakow, D.: Effective slot filling based on shallow distant supervision methods. In: Proceedings of the TAC KBP 2013 Workshop. National Institute of Standards and Technology (2013)

  21. Mintz, M., Bills, S., Snow, R., Jurafsky, D.: Distant supervision for relation extraction without labeled data. In: Proceedings of the Joint ACL/IJCNLP Conference, pp. 1003–1011 (2009)

  22. Chernov, V., Petukhova, V., Klakow, D.: Linguistically motivated question classification. In: Proceedings of the 20th Nordic Conference on Computational Linguistics (NODALIDA), pp. 51–59 (2015)

  23. Petukhova, V., Gropp, M., Klakow, D., Eigner, G., Topf, M., Srb, S., Motlicek, P., Potard, B., Dines, J., Deroo, O., Egeler, R., Meinz, U., Liersch, S.: The DBOX corpus collection of spoken human-human and human-machine dialogues. In: Proceedings of the 9th Language Resources and Evaluation Conference (LREC) (2014)

  24. Petukhova, V.: Understanding questions and finding answers: semantic relation annotation to compute the expected answer type. In: Proceedings of the Tenth Joint ISO - ACL SIGSEM Workshop on Interoperable Semantic Annotation (ISA-10), Reykjavik, Iceland, pp. 44–52 (2014)

  25. Heilman, M.: Automatic factual question generation from text. Ph.D. thesis, Carnegie Mellon University, USA (2011)

  26. Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E.: Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830 (2011)

  27. Kamp, H., Reyle, U.: From discourse to logic. Introduction to model-theoretic semantics of natural language, formal logic and discourse representation theory. In: Studies in Linguistics and Philosophy, vol. 42. Kluwer, Dordrecht (1993)

  28. Palmer, M., Gildea, D., Kingsbury, P.: The Proposition Bank: an annotated corpus of semantic roles. Comput. Linguist. 31(1), 71–106 (2005)

  29. ICSI: FrameNet (2005). http://framenet.icsi.berkeley.edu

  30. Petukhova, V., Bunt, H.: LIRICS semantic role annotation: design and evaluation of a set of data categories. In: Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC 2008), Paris. ELRA (2008)

  31. Wiegand, M., Klakow, D.: Towards the detection of reliable food-health relationships. In: Proceedings of the NAACL-Workshop on Language Analysis in Social Media (NAACL-LASM), pp. 69–79 (2013)

  32. Bunt, H., Palmer, M.: Conceptual and representational choices in defining an ISO standard for semantic role annotation. In: Proceedings of the Ninth Joint ISO - ACL SIGSEM Workshop on Interoperable Semantic Annotation (ISA-9), Potsdam, pp. 41–50 (2013)

Acknowledgments

The research reported in this paper was carried out within the DBOX Eureka project under number E! 7152.

Author information

Correspondence to Volha Petukhova.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper

Cite this paper

Petukhova, V., Putra, D.D., Chernov, A., Klakow, D. (2018). Understanding Questions and Extracting Answers: Interactive Quiz Game Application Design. In: Vetulani, Z., Mariani, J., Kubis, M. (eds) Human Language Technology. Challenges for Computer Science and Linguistics. LTC 2015. Lecture Notes in Computer Science, vol 10930. Springer, Cham. https://doi.org/10.1007/978-3-319-93782-3_18

  • DOI: https://doi.org/10.1007/978-3-319-93782-3_18

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-93781-6

  • Online ISBN: 978-3-319-93782-3
