Sentence Compression with Reinforcement Learning

  • Liangguo Wang
  • Jing Jiang
  • Lejian Liao
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11061)

Abstract

Deletion-based sentence compression is frequently formulated as a constrained optimization problem and solved by integer linear programming (ILP). However, searching the space of all possible compressions with ILP becomes intractable for very long sentences with many constraints. Moreover, the hard constraints of ILP restrict the set of feasible solutions, a problem that parsing errors can make even more severe. As an alternative, we formulate the task in a reinforcement learning framework, where the hard constraints are instead applied as soft rewards. Experimental results show that our method achieves competitive performance with a large improvement in speed.
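The idea of turning hard constraints into soft rewards can be illustrated with a minimal sketch. This is not the paper's model: the per-token Bernoulli keep/delete policy, the `must_keep` positions, the penalty weight, and the target compression ratio are all hypothetical choices made for illustration. A violated "must keep this token" constraint subtracts from the reward rather than ruling the compression out, and a REINFORCE-style update trains the policy.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def reward(mask, must_keep, target_ratio=0.5):
    # Soft reward: instead of enforcing "token i must be kept" as a hard
    # ILP constraint, each violation subtracts a fixed penalty; a second
    # term steers the output toward a target compression ratio.
    ratio = sum(mask) / len(mask)
    violations = sum(1 for i in must_keep if mask[i] == 0)
    return -abs(ratio - target_ratio) - 1.0 * violations

def train(n_tokens=6, must_keep=(1, 3), steps=2000, lr=0.5):
    # One logit per token position; sigmoid(logit) = P(keep token i).
    logits = [0.0] * n_tokens
    baseline = 0.0  # running-average reward, a variance-reducing baseline
    for _ in range(steps):
        probs = [sigmoid(v) for v in logits]
        # Sample a keep/delete decision for every token (the "episode").
        mask = [1 if random.random() < p else 0 for p in probs]
        r = reward(mask, must_keep)
        baseline = 0.9 * baseline + 0.1 * r
        adv = r - baseline
        # REINFORCE: d(log P(action))/d(logit) = action - prob
        # for a Bernoulli policy parameterized by a sigmoid.
        for i in range(n_tokens):
            logits[i] += lr * adv * (mask[i] - probs[i])
    return [sigmoid(v) for v in logits]

keep_probs = train()
```

After training, the keep probabilities at the constrained positions rise toward 1 even though the constraint was never enforced exactly, which is the sense in which the soft formulation tolerates (rather than propagates) errors in the constraints themselves.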

Keywords

Sentence compression · Deep reinforcement learning

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Beijing Institute of Technology, Beijing, China
  2. Singapore Management University, Singapore, Singapore