Abstract
Text summarization has become increasingly important in today’s world of information overload. Recently, simpler architectures that rely solely on attention mechanisms have been applied to neural machine translation. We propose a similar model for the task of text summarization. The proposed model not only trains faster than the commonly used recurrent neural network-based architectures but also produces encouraging results. We trained our model on a dump of Wikipedia articles and achieved a ROUGE-1 F-measure of 0.54 and a BLEU score of 15.74.
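The core operation of such attention-only (Transformer-style) models is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V. A minimal NumPy sketch of this operation is shown below; it is an illustrative reconstruction of the standard mechanism, not the authors' implementation, and the toy shapes are arbitrary.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for 2-D query/key/value matrices."""
    d_k = Q.shape[-1]
    # Similarity of every query position to every key position,
    # scaled by sqrt(d_k) to keep the softmax in a stable range.
    scores = Q @ K.T / np.sqrt(d_k)
    # Numerically stable row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted average of the value vectors.
    return weights @ V, weights

# Toy example: 2 query positions attending over 3 key/value positions, d_k = 4.
rng = np.random.default_rng(0)
Q = rng.standard_normal((2, 4))
K = rng.standard_normal((3, 4))
V = rng.standard_normal((3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

In a full Transformer this operation is applied in parallel across multiple heads and stacked in encoder and decoder layers; for summarization, the decoder attends over the encoded source article while generating the summary.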
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Panchal, R., Pagarkar, A., Kurup, L. (2019). An Attention-Based Approach to Text Summarization. In: Kulkarni, A., Satapathy, S., Kang, T., Kashan, A. (eds) Proceedings of the 2nd International Conference on Data Engineering and Communication Technology. Advances in Intelligent Systems and Computing, vol 828. Springer, Singapore. https://doi.org/10.1007/978-981-13-1610-4_20
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-1609-8
Online ISBN: 978-981-13-1610-4
eBook Packages: Engineering (R0)