
Corpora and Evaluation for Text Summarisation

  • Parth Mehta
  • Prasenjit Majumder
Chapter

Abstract

A standard benchmark collection is essential to the reproducibility of any research. Several early works in text summarisation suffered from the lack of standard evaluation corpora at the time [1, 8]. The advent of evaluation campaigns such as the Document Understanding Conference (DUC) [2] and the Text Analysis Conference (TAC) [18] solved that problem. These conferences produced standard evaluation benchmarks for text summarisation, which in turn made streamlined, comparable research efforts possible. Today the benchmark collections of documents and manually written summaries provided by DUC and TAC are by far the most widely used collections for text summarisation, and they have become essential both for reproducibility and for comparing performance across systems. However, with many data-driven approaches being proposed in the last few years, the DUC and TAC collections, with their hundreds of article-summary pairs, are no longer sufficient. A few other corpora, such as the Gigaword corpus and the CNN/DailyMail corpus [21], contain millions of document-summary pairs, but they are not publicly available and hence are of limited use. Moreover, both these corpora, like DUC and TAC, consist only of newswire; TAC did later introduce a task on biomedical article summarisation, which we discuss later in this chapter. Overall, there are few domain-specific corpora that are both large enough to benefit data-driven approaches and publicly available. In this work we propose two new corpora for domain-specific summarisation in the legal and scientific domains. The legal corpus consists of judgements delivered by the Supreme Court of India along with summaries handwritten by legal experts. The corpus of scientific articles consists of research papers from the ACL Anthology, a publicly available repository of research papers in computational linguistics and related areas. In this chapter we briefly discuss the DUC and TAC corpora as well as the corpora developed as part of this work, and we provide an overview of the various strategies used to evaluate summarisation systems.
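
To make the evaluation setting concrete, the short sketch below scores a candidate summary against a reference summary with ROUGE [11], the n-gram overlap measure used throughout the DUC and TAC evaluations. It is a minimal illustration that assumes the open-source rouge-score Python package; the example sentences and variable names are hypothetical and not taken from the corpora described in this chapter.

# Minimal sketch of ROUGE-based summary evaluation [11].
# Assumes the third-party "rouge-score" package (pip install rouge-score);
# the example texts below are illustrative, not drawn from any corpus.
from rouge_score import rouge_scorer

reference = "The Supreme Court allowed the appeal and set aside the order of the lower court."
candidate = "The court allowed the appeal and set aside the earlier order."

# ROUGE-1/ROUGE-2 measure unigram/bigram overlap; ROUGE-L uses the longest common subsequence.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    print(f"{name}: precision={score.precision:.3f} recall={score.recall:.3f} f1={score.fmeasure:.3f}")

In the DUC and TAC settings several reference summaries are typically available for each topic, and ROUGE scores are averaged over references and topics before systems are compared.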

References

  1. Brandow, R., Mitze, K., Rau, L.F.: Automatic condensation of electronic publications by sentence selection. Inf. Process. Manag. 31(5), 675–685 (1995)
  2. Dang, H.T.: Overview of DUC 2005. Proc. Doc. Underst. Conf. 2005, 1–12 (2005)
  3. Dang, H.T.: DUC 2005: evaluation of question-focused summarization systems. In: Proceedings of the Workshop on Task-Focused Summarization and Question Answering, pp. 48–55. Association for Computational Linguistics (2006)
  4. Donaway, R.L., Drummey, K.W., Mather, L.A.: A comparison of rankings produced by summarization evaluation measures. In: Proceedings of the 2000 NAACL-ANLP Workshop on Automatic Summarization, pp. 69–78. Association for Computational Linguistics (2000)
  5. Harman, D., Over, P.: The effects of human variation in DUC summarization evaluation. In: Text Summarization Branches Out (2004)
  6. Jaidka, K., Chandrasekaran, M.K., Elizalde, B.F., Jha, R., Jones, C., Kan, M.Y., Khanna, A., Molla-Aliod, D., Radev, D.R., Ronzano, F., et al.: The computational linguistics summarization pilot task. In: Proceedings of the Text Analysis Conference (TAC), Gaithersburg, Maryland, USA (2014)
  7. Jaidka, K., Chandrasekaran, M.K., Rustagi, S., Kan, M.Y.: Overview of the CL-SciSumm 2016 shared task. In: Proceedings of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL), pp. 93–102 (2016)
  8. Kupiec, J., Pedersen, J., Chen, F.: A trainable document summarizer. In: Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pp. 68–73. ACM (1995)
  9. Lapata, M., Barzilay, R.: Automatic evaluation of text coherence: models and representations. IJCAI 5, 1085–1090 (2005)
  10. Lin, C.Y.: Summary evaluation environment (2001). https://www.isi.edu/cyl/SEE
  11. Lin, C.Y.: ROUGE: a package for automatic evaluation of summaries. In: Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, vol. 8. Association for Computational Linguistics (2004)
  12. Lin, J., Mohammed, S., Sequiera, R., Tan, L., Ghelani, N., Abualsaud, M., McCreadie, R., Milajevs, D., Voorhees, E.M.: Overview of the TREC 2017 real-time summarization track. In: Proceedings of the Twenty-Sixth Text REtrieval Conference, TREC 2017, Gaithersburg, Maryland, USA, 15–17 Nov 2017 (2017). https://trec.nist.gov/pubs/trec26/papers/Overview-RT.pdf
  13. Mehta, P., Majumder, P.: Content based weighted consensus summarization. In: European Conference on Information Retrieval, pp. 787–793. Springer (2018)
  14. Mehta, P., Majumder, P.: Exploiting local and global performance of candidate systems for aggregation of summarization techniques (2018). arXiv:1809.02343
  15. Nenkova, A., McKeown, K., et al.: Automatic summarization. Found. Trends Inf. Retr. 5(2–3), 103–233 (2011)
  16. Nenkova, A., Passonneau, R.: Evaluating content selection in summarization: the pyramid method. In: Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004 (2004)
  17. Nenkova, A., Passonneau, R., McKeown, K.: The pyramid method: incorporating human content selection variation in summarization evaluation. ACM Trans. Speech Lang. Process. (TSLP) 4(2), 4 (2007)
  18. Owczarzak, K., Dang, H.T.: Overview of the TAC 2011 summarization track: guided task and AESOP task. In: Proceedings of the Text Analysis Conference (TAC 2011), Gaithersburg, Maryland, USA (2011)
  19. Papineni, K., Roukos, S., Ward, T., Zhu, W.J.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pp. 311–318. Association for Computational Linguistics (2002)
  20. Rath, G., Resnick, A., Savage, T.: The formation of abstracts by the selection of sentences. Part I. Sentence selection by men and machines. Am. Doc. 12(2), 139–141 (1961)
  21. See, A., Liu, P.J., Manning, C.D.: Get to the point: summarization with pointer-generator networks. In: Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 1073–1083 (2017)

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Information Retrieval and Language Processing Lab, Dhirubhai Ambani Institute of Information and Communication Technology, Gandhinagar, India