
Studio Ousia’s Quiz Bowl Question Answering System

Conference paper in The NIPS '17 Competition: Building Intelligent Systems

Abstract

In this chapter, we describe our question answering system, which was the winning system at the Human–Computer Question Answering (HCQA) Competition at the Thirty-first Annual Conference on Neural Information Processing Systems (NIPS). The competition requires participants to address a factoid question answering task referred to as quiz bowl. To address this task, we use two novel neural network models and combine these models with conventional information retrieval models using a supervised machine learning model. Our system achieved the best performance among the systems submitted in the competition and won a match against six top human quiz experts by a wide margin.
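The abstract describes combining neural network models with conventional information retrieval models via a supervised machine learning model. The following is a minimal sketch of that idea, not the authors' actual pipeline: the feature names, toy scores, and the use of scikit-learn's `GradientBoostingClassifier` are assumptions for illustration (the paper's final system uses LightGBM; see note 11 below).

```python
# Hedged sketch: combine per-candidate scores from hypothetical base models
# (a neural model, an IR model, an entity-type model) with a supervised
# gradient-boosted classifier, then answer with the top-ranked candidate.
from sklearn.ensemble import GradientBoostingClassifier

# Each row holds the scores that the base models assign to one
# (question, candidate answer) pair; label 1 means the candidate is correct.
X_train = [
    [0.9, 0.7, 0.8],  # [neural_score, ir_score, entity_type_score]
    [0.2, 0.4, 0.1],
    [0.8, 0.9, 0.6],
    [0.1, 0.3, 0.2],
]
y_train = [1, 0, 1, 0]

ranker = GradientBoostingClassifier(n_estimators=50, random_state=0)
ranker.fit(X_train, y_train)

# At answer time, score every candidate with the combined model and
# pick the candidate with the highest probability of being correct.
candidates = {"Mark Twain": [0.85, 0.8, 0.7], "Jack London": [0.3, 0.5, 0.2]}
scores = {name: ranker.predict_proba([feats])[0][1]
          for name, feats in candidates.items()}
best = max(scores, key=scores.get)
```

The design point is that the combiner is trained on (question, candidate) pairs rather than on questions directly, so adding a new base model only means adding one feature column.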


Notes

  1. http://protobowl.com/
  2. The dataset was obtained from the authors’ website: https://cs.umd.edu/~miyyer/qblearn/.
  3. https://developers.google.com/freebase/
  4. https://wikipedia2vec.github.io/
  5. http://pytorch.org
  6. The mapping was obtained from FIGER’s GitHub repository: https://github.com/xiaoling/figer/.
  7. http://pytorch.org/
  8. We aggregate probabilities because an entity can have multiple entity types in both the coarse-grained and the fine-grained models.
  9. We use the list of stop words contained in the scikit-learn library.
  10. We use Apache OpenNLP to detect noun words and proper noun words.
  11. https://github.com/Microsoft/LightGBM
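The aggregation mentioned in note 8 can be sketched as follows. Since an entity can carry several entity types, a type model's predicted probabilities are summed over the types each candidate entity has. The type inventory, probabilities, and entity-to-type mapping below are made up for illustration, not taken from the paper.

```python
# Hypothetical per-type probabilities produced by an entity-type model
# for one question.
type_probs = {"person": 0.6, "author": 0.25, "location": 0.1, "city": 0.05}

# Hypothetical mapping from candidate entities to their (multiple) types.
entity_types = {
    "Mark Twain": ["person", "author"],
    "San Francisco": ["location", "city"],
}

def aggregate(entity):
    # Sum the predicted probability mass over every type the entity has;
    # types unknown to the model contribute zero.
    return sum(type_probs.get(t, 0.0) for t in entity_types[entity])

scores = {e: aggregate(e) for e in entity_types}
# Mark Twain: 0.6 + 0.25 = 0.85; San Francisco: 0.1 + 0.05 = 0.15
```

Summing (rather than taking the maximum) rewards candidates whose several types are all compatible with the question.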


Author information

Correspondence to Ikuya Yamada.

Copyright information

© 2018 Springer International Publishing AG, part of Springer Nature

About this paper


Cite this paper

Yamada, I., Tamaki, R., Shindo, H., Takefuji, Y. (2018). Studio Ousia’s Quiz Bowl Question Answering System. In: Escalera, S., Weimer, M. (eds) The NIPS '17 Competition: Building Intelligent Systems. The Springer Series on Challenges in Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-319-94042-7_10

  • DOI: https://doi.org/10.1007/978-3-319-94042-7_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-94041-0

  • Online ISBN: 978-3-319-94042-7

