Abstract
Abstractive text summarization using sequence-to-sequence networks has been successful for short texts. However, these models show their limitations when summarizing long texts, as they tend to forget information from distant sentences. We propose an abstractive summarization model that uses rich features to overcome this weakness. The proposed system has been tested on two datasets: an English dataset (CNN/Daily Mail) and a Vietnamese dataset (Baomoi). Experimental results show that our model significantly outperforms recently proposed models on both datasets.
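The "rich features" idea follows the line of work that augments each input token with linguistic annotations (Nallapati et al., 2016, for instance, use POS tags, NER tags, and TF-IDF statistics). Below is a minimal sketch of that idea, assuming PyTorch; the POS/NER feature set and all class, parameter, and dimension names are illustrative assumptions, not the authors' implementation, since the abstract does not enumerate the features:

```python
# A minimal sketch, assuming PyTorch. The feature set (POS and named-entity
# tags) and every name below are illustrative, not the authors' code: the
# abstract only says "rich features" are added to the encoder input.
import torch
import torch.nn as nn

class RichFeatureEncoder(nn.Module):
    """Bidirectional LSTM encoder over word + linguistic-feature embeddings."""

    def __init__(self, vocab_size, pos_size, ner_size,
                 word_dim=128, feat_dim=16, hidden_dim=256):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, feat_dim)   # part-of-speech tags
        self.ner_emb = nn.Embedding(ner_size, feat_dim)   # named-entity tags
        # The LSTM consumes each token's word embedding concatenated with
        # its feature embeddings, so every token carries extra linguistic cues.
        self.lstm = nn.LSTM(word_dim + 2 * feat_dim, hidden_dim,
                            batch_first=True, bidirectional=True)

    def forward(self, words, pos_tags, ner_tags):
        # words, pos_tags, ner_tags: LongTensors of shape (batch, seq_len)
        x = torch.cat([self.word_emb(words),
                       self.pos_emb(pos_tags),
                       self.ner_emb(ner_tags)], dim=-1)
        outputs, state = self.lstm(x)  # outputs: (batch, seq_len, 2*hidden_dim)
        return outputs, state
```

A standard attention (or pointer-generator) decoder would then attend over `outputs` as usual; the point of the sketch is only that the extra feature channels give the model additional per-token signal when summarizing long documents.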
Notes
1. Available at http://github.com/phongnt570/UETsegmenter.
2. Available at http://stanfordnlp.github.io/CoreNLP/.
3. Available at https://github.com/abisee/pointer-generator.
Cite this paper
Quoc, V.N., Thanh, H.L., Minh, T.L. (2020). Abstractive Text Summarization Using LSTMs with Rich Features. In: Nguyen, L.M., Phan, X.H., Hasida, K., Tojo, S. (eds.) Computational Linguistics. PACLING 2019. Communications in Computer and Information Science, vol. 1215. Springer, Singapore. https://doi.org/10.1007/978-981-15-6168-9_3