
Wear the Right Head: Comparing Strategies for Encoding Sentences for Aspect Extraction

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11832)

Abstract

In this work we investigate the impact of the encoding mechanisms used in neural aspect extraction models on the quality of the resulting aspects. We concentrate on the neural attention-based aspect extraction (ABAE) model and evaluate five different types of encoding mechanisms: simple averaging, self-attention with and without positional encoding, recurrent, and convolutional architectures. Our experiments on four datasets of user reviews demonstrate that, within the family of ABAE-like architectures, all models, regardless of the encoding mechanism, show similar results in terms of standard coherence metrics on both English and Russian data. Our qualitative study shows that all models also yield interpretable aspects, and the differences in quality are often very minor.
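As an illustration of two of the compared encoders, the sketch below contrasts simple averaging with ABAE-style attention pooling (He et al., 2017), in which each word is weighted by its relevance to the averaged sentence vector before pooling. This is a minimal NumPy sketch, not the authors' implementation; the embedding dimensionality, the random toy embeddings, and the attention matrix M are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the authors' code) of two sentence encoders
# compared in the paper: simple averaging and ABAE-style attention pooling.
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def encode_average(word_vecs):
    """Simple averaging: the sentence vector is the mean of its word embeddings."""
    return word_vecs.mean(axis=0)

def encode_abae_attention(word_vecs, M):
    """ABAE-style attention pooling: each word is scored against the averaged
    sentence vector through a learned matrix M, and the scores (after softmax)
    weight the words in the final sentence vector."""
    y = word_vecs.mean(axis=0)       # global context vector of the sentence
    scores = word_vecs @ M @ y       # d_i = e_i^T M y
    weights = softmax(scores)        # a_i = softmax(d_i)
    return weights @ word_vecs       # z = sum_i a_i e_i

# Toy usage: a 7-word sentence with 200-dimensional embeddings (assumed sizes).
rng = np.random.default_rng(0)
E = rng.normal(size=(7, 200))
M = rng.normal(size=(200, 200))
print(encode_average(E).shape, encode_abae_attention(E, M).shape)
```

In the full ABAE model the pooled sentence vector is further projected onto a learned aspect embedding matrix and trained with a reconstruction objective; the other encoders evaluated in the paper (self-attention with positional encoding, recurrent, and convolutional) replace the pooling step sketched above.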


Notes

  1. https://github.com/dartrevan/health_data.


Acknowledgements

Work on problem definition and model development was carried out at the Samsung-PDMI Joint AI Center at PDMI RAS and supported by Samsung Research. Work on experiments with Russian-language datasets was carried out by S.N. and E.T. and supported by the Russian Science Foundation grant no. 18-11-00284.

Author information


Corresponding author

Correspondence to Anton Alekseev.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Malykh, V., Alekseev, A., Tutubalina, E., Shenbin, I., Nikolenko, S. (2019). Wear the Right Head: Comparing Strategies for Encoding Sentences for Aspect Extraction. In: van der Aalst, W., et al. (eds.) Analysis of Images, Social Networks and Texts. AIST 2019. Lecture Notes in Computer Science, vol. 11832. Springer, Cham. https://doi.org/10.1007/978-3-030-37334-4_15


  • DOI: https://doi.org/10.1007/978-3-030-37334-4_15


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-37333-7

  • Online ISBN: 978-3-030-37334-4

  • eBook Packages: Computer Science (R0)
