Encyclopedia of Database Systems

2018 Edition
| Editors: Ling Liu, M. Tamer Özsu

Web Question Answering

  • Charles L. A. Clarke
Reference work entry
DOI: https://doi.org/10.1007/978-1-4614-8265-9_1363

Synonyms

Web QA

Definition

A question answering (QA) system returns exact answers to questions posed by users in natural language, together with evidence supporting those answers. A Web QA system maintains a corpus of Web pages and other Web resources in order to determine these answers and to provide the required evidence.

A basic QA system might support only simple factual (or “factoid”) questions. For example, the user might pose the question
  • Q1. What is the population of India?

    and receive the answer “1.2 billion,” with evidence provided by the CIA World Factbook. A more advanced QA system might support more complex questions, seeking opinions or relationships between entities. Answering these complex questions might require integrating information from multiple sources. For example, the user might pose the question

  • Q2. What methods are used to transport drugs from Mexico to the USA?

    and hope to receive a summary of information drawn from newspaper articles...
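A technique common to several of the Web QA systems in the reading list below is to exploit the redundancy of the Web: because a fact like the answer to Q1 is restated on many pages, a candidate phrase that recurs across many retrieved snippets is likely to be the answer. The following sketch illustrates the idea only; the snippets, stopword list, and n-gram scoring are invented for this example and do not reproduce any particular system.

```python
import re
from collections import Counter

# Tiny stopword list, invented for this sketch.
STOPWORDS = frozenset("the of is a in has about with what".split())

def candidate_answers(snippets, max_n=3):
    """Count word n-grams (n <= max_n) across all snippets as answer candidates."""
    counts = Counter()
    for snippet in snippets:
        # Tokenize; keep decimal numbers like "1.2" intact.
        words = re.findall(r"[a-z0-9]+(?:\.[0-9]+)?", snippet.lower())
        for n in range(1, max_n + 1):
            for i in range(len(words) - n + 1):
                counts[" ".join(words[i:i + n])] += 1
    return counts

def best_answer(question, snippets):
    """Pick the candidate that recurs most often, favoring longer phrases and
    discarding candidates that merely repeat stopwords or question words."""
    q_words = set(re.findall(r"[a-z0-9]+(?:\.[0-9]+)?", question.lower()))
    best, best_score = None, 0
    for cand, freq in candidate_answers(snippets).items():
        words = cand.split()
        if any(w in q_words or w in STOPWORDS for w in words):
            continue
        score = freq * len(words)  # redundancy weight, favoring longer answers
        if score > best_score:
            best, best_score = cand, score
    return best

# Snippets as a Web search for Q1 might return them (invented for illustration):
snippets = [
    "The population of India is 1.2 billion.",
    "India has a population of about 1.2 billion people.",
    "With 1.2 billion inhabitants, India is the second most populous country.",
]
print(best_answer("What is the population of India?", snippets))  # 1.2 billion
```

Because “1.2 billion” recurs in every snippet while other phrases do not, the redundancy score selects it without any linguistic analysis of the question; real systems combine such voting with question typing, passage retrieval, and answer validation.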


Recommended Reading

  1. Agichtein E, Gravano L. Snowball: extracting relations from large plain-text collections. In: Proceedings of the ACM International Conference on Digital Libraries; 2000. p. 85–94.
  2. Agichtein E, Gravano L. Querying text databases for efficient information extraction. In: Proceedings of the 19th International Conference on Data Engineering; 2003. p. 113–24.
  3. Bilotti MW, Ogilvie P, Callan J, Nyberg E. Structured retrieval for question answering. In: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2007. p. 351–8.
  4. Brin S. Extracting patterns and relations from the World Wide Web. In: Proceedings of the International Workshop on the World Wide Web and Databases; 1998. p. 172–83.
  5. Clarke CLA, Cormack GV, Lynam TR. Exploiting redundancy in question answering. In: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2001. p. 358–65.
  6. Dumais S, Banko M, Brill E, Lin J, Ng A. Web question answering: is more always better? In: Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2002. p. 291–8.
  7. Kushmerick N, Weld DS, Doorenbos RB. Wrapper induction for information extraction. In: Proceedings of the 15th International Joint Conference on AI; 1997. p. 729–37.
  8. Kwok C, Etzioni O, Weld DS. Scaling question answering to the Web. ACM Trans Inf Syst. 2001;19(3):242–62.
  9. Lam SKS, Özsu MT. Querying Web data – the WebQA approach. In: Proceedings of the 3rd International Conference on Web Information Systems Engineering; 2002. p. 139–48.
  10. Lin J, Katz B. Building a reusable test collection for question answering. J Am Soc Inf Sci Technol. 2006;57(7):851–61.
  11. Marton G, Radul A. Nuggeteer: automatic nugget-based evaluation using descriptions and judgements. In: Proceedings of the Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics; 2006. p. 375–82.
  12. Narayanan S, Harabagiu S. Question answering based on semantic structures. In: Proceedings of the 20th International Conference on Computational Linguistics; 2004. p. 693–701.
  13. Pasca MA, Harabagiu SM. High performance question/answering. In: Proceedings of the 24th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2001. p. 366–74.
  14. Prager J. Open-domain question-answering. Found Trends Inf Retr. 2006;1(2):91–231.
  15. Prager J, Brown E, Coden A, Radev D. Question-answering by predictive annotation. In: Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2000. p. 184–91.
  16. Radev DR, Qi H, Zheng Z, Blair-Goldensohn S, Zhang Z, Fan W, Prager J. Mining the Web for answers to natural language questions. In: Proceedings of the International Conference on Information and Knowledge Management; 2001. p. 143–50.
  17. Strzalkowski T, Harabagiu S, editors. Advances in open domain question answering. Secaucus: Springer; 2006.
  18. Tellex S, Katz B, Lin J, Fernandes A, Marton G. Quantitative evaluation of passage retrieval algorithms for question answering. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2003. p. 41–7.
  19. Voorhees EM. Question answering in TREC. In: Voorhees EM, Harman DK, editors. TREC: experiment and evaluation in information retrieval. Cambridge: MIT; 2005. p. 233–57.
  20. Yang H, Chua T-S, Wang S, Koh C-K. Structured use of external knowledge for event-based open domain question answering. In: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval; 2003. p. 33–40.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  1. University of Waterloo, Waterloo, Canada

Section editors and affiliations

  • Cong Yu
  1. Google Research, New York, USA