Better Quality Classifiers for Social Media Content: Crowdsourcing with Decision Trees

  • Ian McCulloh
  • Rachel Cohen
  • Richard Takacs
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 767)

Abstract

As social media use grows and increasingly becomes a forum for debate on politics, social issues, sports, and brands, accurately classifying social media sentiment remains an important computational challenge. Social media posts present numerous challenges for text classification. This paper presents an approach that introduces guided decision trees into the design of a crowdsourcing platform to extract additional data features, reduce task cognitive complexity, and improve the quality of the resulting labeled text corpus. We compare the quality of the proposed approach with off-the-shelf sentiment classifiers and with a crowdsourced solution without a decision tree, using a sample of tweets from the social media firestorm #CancelColbert. We find that the proposed crowdsourcing-with-decision-tree approach produces a higher-quality training corpus, which is necessary for effective classification of social media content.
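To make the guided-decision-tree idea concrete, the sketch below shows in Python how a short yes/no question flow might route a crowd worker to a sentiment label while recording the intermediate answers as additional corpus features. The specific questions, labels, and the annotate helper are illustrative assumptions, not the authors' actual annotation instrument.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple


@dataclass
class Node:
    """One step in a guided decision tree shown to a crowd worker."""
    question: str = ""
    yes: Optional["Node"] = None   # next node if the worker answers "yes"
    no: Optional["Node"] = None    # next node if the worker answers "no"
    label: Optional[str] = None    # terminal sentiment label, if this is a leaf


# Hypothetical question flow; the paper's instrument may differ.
TREE = Node(
    question="Does the tweet express an opinion rather than only report facts?",
    no=Node(label="neutral"),
    yes=Node(
        question="Is the tweet sarcastic or ironic?",
        yes=Node(
            question="Is the underlying attitude negative toward its target?",
            yes=Node(label="negative"),
            no=Node(label="positive"),
        ),
        no=Node(
            question="Is the expressed attitude negative toward its target?",
            yes=Node(label="negative"),
            no=Node(label="positive"),
        ),
    ),
)


def annotate(ask: Callable[[str], bool]) -> Tuple[str, List[bool]]:
    """Walk the tree with ask(question) -> bool and return the final label
    plus the yes/no answer path, which can be stored as extra features."""
    node, path = TREE, []
    while node.label is None:
        answer = ask(node.question)
        path.append(answer)
        node = node.yes if answer else node.no
    return node.label, path


if __name__ == "__main__":
    # Simulated worker: opinionated, sarcastic, negative underlying attitude.
    scripted = iter([True, True, True])
    label, features = annotate(lambda _q: next(scripted))
    print(label, features)  # negative [True, True, True]
```

Each question in such a flow is a simpler judgment than assigning an overall sentiment in one step, and the recorded answer path gives the resulting corpus features beyond the final label.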

Keywords

Social media · Sentiment · Classifier · Machine learning · Decision tree · Twitter · Turk

Notes

Acknowledgements

This work was supported by the Office of Naval Research, Grant No. N00014-17-1-2981/127025.

Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Johns Hopkins University, Laurel, USA