In this chapter we present an overview of the work discussed throughout the book and point out some open questions and possible research directions. We proposed several techniques that can improve or complement existing sentence extraction systems. We introduced two new corpora, consisting of legal and scientific articles, that can be used for evaluating sentence compression and abstractive summarisation systems. We then proposed an attention-based sentence extraction technique that is capable of identifying key information in documents without requiring any manually labelled data. We showed that such techniques, which rely on large amounts of pseudo-labelled data, can easily outperform systems that depend on domain knowledge and manual annotations.