Deep Bi-directional Long Short-Term Memory Neural Networks for Sentiment Analysis of Social Data

  • Conference paper
Integrated Uncertainty in Knowledge Modelling and Decision Making (IUKM 2016)

Abstract

Sentiment analysis (SA) has attracted many studies in natural language processing and text mining, and many algorithmic enhancements for various SA applications have recently been investigated and introduced. Deep Convolutional Neural Networks (DCNNs) have recently been shown to give state-of-the-art performance on sentiment classification of social data. Although these solutions effectively address multi-level feature representation, they have limitations in temporal modeling. Conversely, conventional Bi-directional Long Short-Term Memory (BLSTM) models are limited in multi-level feature representation but can keep track of temporal information while enabling deep representations of the data. In this paper, we propose a Deep Bi-directional Long Short-Term Memory (DBLSTM) architecture with multi-level feature representation for sentiment polarity classification (SPC) on social data. With DBLSTM, we can exploit more feature levels than with BLSTM while inheriting BLSTM's temporal modeling. Moreover, the language of social data is very informal, with misspellings and abbreviations; one word can appear in multiple forms, which is a challenge for word-level models. We therefore use characters as the input of the DBLSTM neural network (called Character DBLSTM, or CDBLSTM) to learn sentence-level representations. The experimental results show that the performance of our model is competitive with the state of the art in SPC on Twitter data. Our model achieves 85.86% accuracy on the Stanford Twitter Sentiment (STS) corpus and 84.82% accuracy on subtask B of the SemEval-2016 Task 4 corpus.
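The core idea of the abstract can be illustrated in code: run a bi-directional LSTM over a sentence's characters, stack a second BLSTM on top (the "deep" part), and classify the pooled representation. The following is a minimal NumPy sketch, not the authors' implementation; the hidden size, random initialization, toy alphabet, and mean-pooling softmax head are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_layer(xs, W, U, b, hidden):
    # One left-to-right LSTM pass; returns the hidden state at each step.
    h, c, out = np.zeros(hidden), np.zeros(hidden), []
    for x in xs:
        z = W @ x + U @ h + b                  # all four gate pre-activations
        i = sigmoid(z[:hidden])                # input gate
        f = sigmoid(z[hidden:2 * hidden])      # forget gate
        o = sigmoid(z[2 * hidden:3 * hidden])  # output gate
        g = np.tanh(z[3 * hidden:])            # candidate cell state
        c = f * c + i * g
        h = o * np.tanh(c)
        out.append(h)
    return out

def bilstm_layer(xs, p_fwd, p_bwd, hidden):
    # Bi-directional: a forward pass and a reversed pass, concatenated
    # per time step.
    fwd = lstm_layer(xs, *p_fwd, hidden)
    bwd = lstm_layer(xs[::-1], *p_bwd, hidden)[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

def make_params(in_dim, hidden):
    return (rng.normal(0, 0.1, (4 * hidden, in_dim)),
            rng.normal(0, 0.1, (4 * hidden, hidden)),
            np.zeros(4 * hidden))

# Character-level input: one-hot vectors over a toy alphabet, so informal
# spellings need no shared word vocabulary.
alphabet = "abcdefghijklmnopqrstuvwxyz "
def encode(sentence):
    xs = []
    for ch in sentence.lower():
        v = np.zeros(len(alphabet))
        v[alphabet.index(ch)] = 1.0
        xs.append(v)
    return xs

hidden = 8
xs = encode("great movie")
# Stacking a second BLSTM on top of the first is what makes it "deep".
layer1 = bilstm_layer(xs, make_params(len(alphabet), hidden),
                      make_params(len(alphabet), hidden), hidden)
layer2 = bilstm_layer(layer1, make_params(2 * hidden, hidden),
                      make_params(2 * hidden, hidden), hidden)
# Mean-pool the top layer into a sentence vector, then a softmax over
# three sentiment classes (positive / negative / neutral).
W_out = rng.normal(0, 0.1, (3, 2 * hidden))
logits = W_out @ np.mean(layer2, axis=0)
probs = np.exp(logits) / np.exp(logits).sum()
```

With random weights the class probabilities are meaningless, but the shapes show the data flow: each character position carries a concatenated forward-and-backward state, and depth comes from feeding one BLSTM's per-step outputs into the next.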


Notes

  1. http://colah.github.io/posts/2015-08-Understanding-LSTMs.

  2. https://github.com/Lasagne/Lasagne.

  3. We retrained the ascii/rnd/200 model on SemEval using AdaGrad with a learning rate of 0.01, achieving 84.13; with Adam and a learning rate of 0.1, the result was 83.21.
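Footnote 3 compares AdaGrad and Adam at different learning rates. For reference, AdaGrad scales each parameter's step by the root of its accumulated squared gradients, so frequently updated parameters take smaller steps. A minimal sketch of the update rule follows; the quadratic objective is only an illustration, not the paper's training setup.

```python
import numpy as np

def adagrad_step(w, grad, cache, lr=0.01, eps=1e-8):
    # Accumulate squared gradients, then divide the step per coordinate
    # by the square root of the accumulated total.
    cache = cache + grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache

# Illustrative use: minimize f(w) = ||w||^2, whose gradient is 2w.
w = np.array([1.0, -2.0])
cache = np.zeros_like(w)
for _ in range(200):
    w, cache = adagrad_step(w, 2 * w, cache, lr=0.1)
```

Because the per-coordinate denominator grows over time, AdaGrad's effective learning rate decays automatically, which is one reason tuned base rates for AdaGrad and Adam often differ, as in the footnote.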


Acknowledgement

This paper is supported by The Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 102.01-2014.22.

Corresponding author

Correspondence to Anh-Cuong Le.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Nguyen, N.K., Le, AC., Pham, H.T. (2016). Deep Bi-directional Long Short-Term Memory Neural Networks for Sentiment Analysis of Social Data. In: Huynh, VN., Inuiguchi, M., Le, B., Le, B., Denoeux, T. (eds) Integrated Uncertainty in Knowledge Modelling and Decision Making. IUKM 2016. Lecture Notes in Computer Science(), vol 9978. Springer, Cham. https://doi.org/10.1007/978-3-319-49046-5_22

  • DOI: https://doi.org/10.1007/978-3-319-49046-5_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-49045-8

  • Online ISBN: 978-3-319-49046-5
