A Deep Learning-Inspired Method for Social Media Satire Detection

  • Sayandip Dutta
  • Anit Chakraborty
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 898)

Abstract

In this paper, we put forward an effective approach to segmenting sentiment in social media texts, which may contain informal language and pop-culture references. We introduce a method for deriving vector representations from phrase-level sentences and train a recurrent neural network that combines these quantitative features with qualitative lexical features drawn from gold-standard lexicons. Using deep RNNs, we extract opinion expressions as a token-level sequence-labeling task over texts of variable length. Furthermore, we introduce a novel approach that combines computational linguistics and machine learning tools to determine whether an article is satirical. We compare the performance of our algorithm against benchmark methods for satire detection on benchmark datasets, news articles, and social media platforms, and obtain competitive results.
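The abstract describes feeding word2vec sentence representations, together with lexicon-based features, into a deep recurrent network for token-level opinion labeling. As a rough illustration only, the sketch below shows one way such a model could be wired up in Keras; the dimensions, cell types, tag set, and lexicon features are assumptions for the sketch, since the paper's architecture details are not given on this page.

```python
# Minimal sketch (not the authors' released code): a deep RNN for token-level
# opinion/sentiment labeling over word2vec embeddings concatenated with
# per-token lexicon features. All names (MAX_LEN, VOCAB_SIZE, w2v_matrix,
# LEX_DIM, N_TAGS) are illustrative assumptions, not values from the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model

MAX_LEN = 50        # assumed maximum sentence length (tokens)
VOCAB_SIZE = 20000  # assumed vocabulary size
EMB_DIM = 300       # typical word2vec dimensionality
LEX_DIM = 4         # e.g. one score per lexicon resource
N_TAGS = 3          # e.g. B/I/O tags for opinion expressions

# Pretrained word2vec embedding matrix (random placeholder here).
w2v_matrix = np.random.normal(size=(VOCAB_SIZE, EMB_DIM)).astype("float32")

# Two inputs: token ids and per-token lexicon feature vectors.
token_ids = layers.Input(shape=(MAX_LEN,), dtype="int32", name="tokens")
lex_feats = layers.Input(shape=(MAX_LEN, LEX_DIM), name="lexicon_features")

# Frozen word2vec embeddings (the distributional, "quantitative" signal).
emb = layers.Embedding(
    VOCAB_SIZE, EMB_DIM,
    embeddings_initializer=tf.keras.initializers.Constant(w2v_matrix),
    trainable=False,
)(token_ids)

# Concatenate embeddings with lexicon features (the "qualitative" signal).
x = layers.Concatenate()([emb, lex_feats])

# Stacked recurrent layers stand in for the deep RNN; the paper's exact
# cell type and depth are not specified on this page.
x = layers.LSTM(128, return_sequences=True)(x)
x = layers.LSTM(64, return_sequences=True)(x)

# Token-level sequence-labeling head.
tags = layers.TimeDistributed(layers.Dense(N_TAGS, activation="softmax"))(x)

model = Model(inputs=[token_ids, lex_feats], outputs=tags)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Keeping the pretrained embeddings frozen and passing the lexicon scores as a parallel per-token input is one simple way to keep the hand-built lexical evidence separate from the learned distributional representation, as the abstract's combination of quantitative and qualitative features suggests.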

Keywords

Recurrent deep neural network · Word2vec · Sentiment analysis in social media · Deep learning · Machine learning

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. MCKV Institute of Engineering, Howrah, India
  2. RCC Institute of Information Technology, Kolkata, India
