Abstract
Following the analyses of the volume and topics of Trump’s tweets in Chap. 3, this chapter examines the rhetoric of his tweets. Because media coverage of Trump’s tweets tends to focus on his most shocking and negative messages, those selected tweets do not represent the full breadth of his language on Twitter. We find that, in the aggregate, Trump’s tweets are neither positive nor negative; instead, they are neutral in tone. Moreover, we find that the extent to which Trump uses negative rhetoric (as opposed to positive rhetoric) changes over time. We also examine how tweet sentiment affects the number of retweets that Trump receives, finding that the more negative the tweet, the more retweets it receives. That is, to the extent that Trump uses his first-mover advantage, behaves as a strategic communicator, and seeks greater attention on Twitter, he goes negative.
Notes
- 1.
Trump, Donald J. “Twitter/@realDonaldTrump: Wow, @CNN got caught fixing their “focus group” in order to make Crooked Hillary look better. Really pathetic and totally dishonest!” October 10, 2016, 12:31 PM. https://twitter.com/realDonaldTrump/status/785563318652178432
- 2.
Trump, Donald J. “Twitter/@realDonaldTrump: Despite winning the second debate in a landslide (every poll), it is hard to do well when Paul Ryan and others give zero support!” October 11, 2016, 5:16 AM. https://twitter.com/realDonaldTrump/status/785816454042124288
- 3.
Trump, Donald J. “Twitter/@realDonaldTrump: Never has the press been more inaccurate, unfair or corrupt! We are not fighting the Democrats, they are easy, we are fighting the seriously dishonest and unhinged Lamestream Media. They have gone totally CRAZY. MAKE AMERICA GREAT AGAIN!” August 10, 2019, 5:07 AM. https://twitter.com/realDonaldTrump/status/1160160760179372032
- 4.
In comparison, a typical Trump tweet since he took office receives about 18,000 retweets.
- 5.
Please see the Appendix in this chapter for additional details on our approach to measuring tweet sentiment.
- 6.
Trump, Donald J. “Twitter/@realDonaldTrump: I am truly honored and grateful for receiving SO much support from our American heroes …” September 16, 2016, 10:58 AM. https://twitter.com/realDonaldTrump/status/776842647294009344
- 7.
Trump, Donald J. “Twitter/@realDonaldTrump: My supporters are the smartest, strongest, most hard working and most loyal that we have seen in our countries history. It is a beautiful thing to watch as we win elections and gather support from all over the country. As we get stronger, so does our country. Best numbers ever!” June 16, 2018, 6:12 AM. https://twitter.com/realDonaldTrump/status/1007974129474121728
- 8.
Trump, Donald J. “Twitter/@realDonaldTrump: Wow, @CNN got caught fixing their “focus group” in order to make Crooked Hillary look better. Really pathetic and totally dishonest!” October 10, 2016, 12:31 PM. https://twitter.com/realDonaldTrump/status/785563318652178432
- 9.
While the intraclass correlation coefficients (ICC) for all models in Table 4.3 are low, the plot of the varying intercepts suggests that a multilevel approach is appropriate (Fig. 4.12 in Appendix). Nezlek (2008) recommends that, instead of relying on the ICC as the sole indicator of whether multilevel modeling is necessary, it is preferable to consider the nature and structure of the data. Here, our data are longitudinal, so using months as the Level-2 grouping variable is appropriate.
References
Aldahawi, Hanaa A., and Stuart M. Allen. 2013. “Twitter Mining in the Oil Business: A Sentiment Analysis Approach.” 2013 IEEE Third International Conference on Cloud and Green Computing: 581–86.
Baccianella, Stefano, Andrea Esuli, and Fabrizio Sebastiani. 2010. “SentiWordNet 3.0: An Enhanced Lexical Resource for Sentiment Analysis and Opinion Mining.” Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC’10): 2200–2204.
Bae, Younggue, and Hongchul Lee. 2012. “Sentiment Analysis of Twitter Audiences: Measuring the Positive or Negative Influence of Popular Twitterers.” Journal of the American Society for Information Science and Technology 63(12): 2521–35.
Bump, Philip. 2016. “Why Donald Trump Tweets Late at Night (and Very Early in the Morning).” Washington Post.
———. 2019. “President Trump, Your Problem Isn’t Bias by Twitter. It’s That You Tweet Too Much.” Washington Post. https://www.washingtonpost.com/politics/2019/07/12/president-trump-your-problem-isnt-bias-by-twitter-its-that-you-tweet-too-much/ (December 27, 2019).
Cambria, Erik, Soujanya Poria, Rajiv Bajpai, and Bjoern Schuller. 2016. “SenticNet 4: A Semantic Resource for Sentiment Analysis Based on Conceptual Primitives.” Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers: 2666–77.
Clarke, Isobelle, and Jack Grieve. 2019. “Stylistic Variation on the Donald Trump Twitter Account: A Linguistic Analysis of Tweets Posted Between 2009 and 2018.” PLoS ONE 14(9): 1–27.
Colley, Dawn F. 2019. “Of Twit-Storms and Demagogues: Trump, Illusory Truths of Patriotism, and the Language of the Twittersphere.” In President Donald Trump and His Political Discourse: Ramifications of Rhetoric via Twitter, ed. Michele Lockhart. New York, NY: Routledge, 33–51.
Giachanou, Anastasia, and Fabio Crestani. 2016. “Like It or Not: A Survey of Twitter Sentiment Analysis Methods.” ACM Computing Surveys 49(2): 1–41.
Griffiths, Brent. 2016. “CNN Pushes Back on Trump’s Claim It ‘rigged’ Focus Group.” Politico. https://www.politico.com/story/2016/10/cnn-trump-rigged-focus-group-debate-229563 (December 24, 2019).
Hu, Minqing, and Bing Liu. 2004. “Mining and Summarizing Customer Reviews.” Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining: 168–77.
Ingram, Mathew. 2017. “The 140-Character President.” Columbia Journalism Review. https://www.cjr.org/special_report/trump-twitter-tweets-president.php (December 13, 2019).
Jockers, Matthew L. 2017. “Syuzhet: An R Package for the Extraction of Sentiment and Sentiment-Based Plot Arcs from Text.” https://github.com/mjockers/syuzhet.
Keith, Tamara. 2017. “From ‘Covfefe’ to Slamming CNN: Trump’s Year in Tweets.” NPR. https://www.npr.org/2017/12/20/571617079/a-year-of-the-trump-presidency-in-tweets (December 13, 2019).
Kertscher, Tom. 2016. “Donald Trump’s Ridiculous Claim That All Polls Show He Won Second Debate with Hillary Clinton.” Politifact. https://www.politifact.com/wisconsin/statements/2016/oct/12/donald-trump/donald-trumps-ridiculous-claim-all-polls-show-he-w/ (December 24, 2019).
Kurtzleben, Danielle. 2017. “What We Learned About the Mood of Trump’s Tweets.” NPR. https://www.npr.org/2017/04/30/526106612/what-we-learned-about-the-mood-of-trumps-tweets (December 24, 2019).
Mohammad, Saif, and Peter Turney. 2010. “Emotions Evoked by Common Words and Phrases: Using Mechanical Turk to Create an Emotion Lexicon.” Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text: 26–34.
Nezlek, John B. 2008. “An Introduction to Multilevel Modeling for Social and Personality Psychology.” Social and Personality Psychology Compass 2(2): 842–60.
Ott, Brian L., and Greg Dickinson. 2019. The Twitter Presidency: Donald J. Trump and the Politics of White Rage. New York, NY: Routledge.
Rinker, Tyler. 2019. “Sentimentr.” https://github.com/trinker/sentimentr (December 24, 2019).
Roenneberg, Till. 2017. “Twitter as a Means to Study Temporal Behaviour.” Current Biology 27(17): 830–32.
Taboada, Maite et al. 2011. “Lexicon-Based Methods for Sentiment Analysis.” Computational Linguistics 37(2): 267–307.
Tsukayama, Hayley. 2017. “Twitter Is Officially Doubling the Character Limit to 280.” Washington Post. https://www.washingtonpost.com/news/the-switch/wp/2017/11/07/twitter-is-officially-doubling-the-character-limit-to-280/ (December 27, 2019).
Wu, Liang, Fred Morstatter, and Huan Liu. 2016. “SlangSD: Building and Using a Sentiment Dictionary of Slang Words for Short-Text Sentiment Classification.” CoRR: 1–15.
Wynn, Matt, and John Fritze. 2019. “Analysis: Trump More Negative, Prolific on Twitter amid Democratic Impeachment Inquiry.” USA Today. https://www.usatoday.com/in-depth/news/politics/2019/12/23/donald-trumps-tweets-get-negative-impeachment-2020-election-loom/2601246001/ (December 24, 2019).
Yaqub, Ussama, Soon Ae Chun, Vijayalakshmi Atluri, and Jaideep Vaidya. 2017. “Analysis of Political Discourse on Twitter in the Context of the 2016 US Presidential Election.” Government Information Quarterly 34: 613–26.
Zimmer, Ben. 2017. “Looking for the Linguistic Smoking-Gun in a Trump Tweet.” The Atlantic. https://www.theatlantic.com/entertainment/archive/2017/12/looking-for-the-linguistic-smoking-gun-in-a-trump-tweet/547361/ (December 24, 2019).
Appendix
Calculating Tweet Sentiment
Sentiment analysis is a common approach to analyzing text data (see, for instance, Aldahawi and Allen 2013; Bae and Lee 2012; Giachanou and Crestani 2016). However, many common methods for conducting sentiment analysis are rudimentary, using only counts of words to calculate a final composite score. Consider the following sentence: “The economy is not doing very good.” Due to the presence of the word “good,” many sentiment analysis algorithms would classify this sentence as positive, while its actual meaning is anything but. To avoid this problem, we account for valence shifters in texts. Valence shifters are words that modify the meanings of surrounding words. For instance, Rinker (2019) notes that
a negator flips the sign of a polarized word (e.g., “I do not like it.”). An amplifier (intensifier) increases the impact of a polarized word (e.g., “I really like it.”). A de-amplifier (downtoner) reduces the impact of a polarized word (e.g., “I hardly like it.”). An adversative conjunction overrules the previous clause containing a polarized word (e.g., “I like it but it’s not worth it.”).
Because valence-shifting words occur regularly in verbal and written communication, and because they shift the meaning (i.e., the polarity) of the words around them, it is important to account for such words when applying sentiment analysis algorithms to textual data. To examine Trump’s tweets via sentiment analysis while accounting for valence shifters, we use the sentimentr package in R. For each tweet, we split the text into sentences, calculate the polarity of each sentence, and then construct an average score for the text polarity of each tweet.
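To make the valence-shifter idea concrete, the sketch below implements a toy version of this kind of scoring in Python. It is an illustration only, not sentimentr’s algorithm (sentimentr is an R package with much larger lexicons and its own windowing and weighting scheme); the word lists, weights, and window size here are invented for the example.

```python
# Hypothetical mini-lexicons for illustration only.
POLARITY = {"like": 1.0, "good": 1.0, "pathetic": -1.0, "dishonest": -1.0}
NEGATORS = {"not", "never", "no"}            # flip the sign
AMPLIFIERS = {"really", "very", "totally"}   # increase impact
DEAMPLIFIERS = {"hardly", "barely", "somewhat"}  # reduce impact

def sentence_polarity(sentence, window=3):
    """Score a sentence, letting valence shifters that appear within
    `window` words before a polarized word modify its contribution."""
    words = sentence.lower().strip(".!?").split()
    scores = []
    for i, w in enumerate(words):
        if w not in POLARITY:
            continue  # neutral word: contributes nothing directly
        weight = 1.0
        for shifter in words[max(0, i - window):i]:
            if shifter in NEGATORS:
                weight *= -1.0   # negator flips the polarity
            elif shifter in AMPLIFIERS:
                weight *= 1.8    # amplifier strengthens it
            elif shifter in DEAMPLIFIERS:
                weight *= 0.5    # de-amplifier weakens it
        scores.append(weight * POLARITY[w])
    # Normalize by sqrt of sentence length, as lexicon methods commonly do
    return sum(scores) / (len(words) ** 0.5) if words else 0.0

print(sentence_polarity("The economy is doing very good"))      # positive
print(sentence_polarity("The economy is not doing very good"))  # negative
```

A plain word-count method would score both example sentences identically; here the negator “not” flips the contribution of “good,” which is exactly the failure mode described above.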
We calculate three measures of tweet sentiment. Each individual word can convey positive sentiment, convey negative sentiment, or be neutral in meaning. Using the text polarity of all of the words in the tweet, the first measure applies the standard method of calculating the average sentiment of each tweet. The second measure downweights the zero values in the averaging; that is, neutral words are downweighted to avoid biasing the measure toward zero. Finally, the third measure upweights negative words. This approach is appropriate if the speaker is likely to surround negative words with positive ones in the same text, a mixture that is likely when the speaker follows polite social convention even though the overall intent of the message is negative.
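The three aggregation schemes can be sketched as follows. This is a minimal Python illustration of the ideas just described, not sentimentr’s exact formulas; the specific weights (0.25 for neutral words, 2.0 for negative words) are assumptions chosen for the example.

```python
def measures(word_scores):
    """Aggregate per-word polarity scores into a tweet score three ways:
    (1) plain mean over all words (neutral words count as zero);
    (2) zero-downweighted mean: neutral words contribute less to the
        denominator, so they pull the score toward zero less;
    (3) negative-upweighted mean: negative words are multiplied up, so
        surrounding politeness does not mask a negative message."""
    n = len(word_scores)
    plain = sum(word_scores) / n
    zero_w = 0.25   # assumed downweight for neutral (zero-score) words
    denom = sum(1.0 if s != 0 else zero_w for s in word_scores)
    downweighted = sum(word_scores) / denom
    neg_mult = 2.0  # assumed upweight for negative words
    upweighted = sum(s * neg_mult if s < 0 else s for s in word_scores) / n
    return plain, downweighted, upweighted

# A mostly neutral tweet with one polite positive and one negative word:
scores = [0, 0, 0, 0.5, -0.75, 0, 0]
plain, down, up = measures(scores)
```

On this example, the plain mean sits close to zero because the five neutral words dilute it; downweighting the zeros makes the slight negativity more visible, and upweighting the negative word makes the overall score clearly negative, mirroring the rationale for the second and third measures.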
Because sentimentr employs a dictionary-based approach to calculating sentiment, we further ensure the robustness of the tweet sentiment measure by calculating it with several different dictionaries: (1) an augmented list of Hu and Liu’s (2004) positive and negative words; (2) a modified version of Jockers’s (2017) sentiment lookup table; (3) a combined and augmented version of Hu and Liu (2004) and Jockers (2017); (4) a filtered version of Mohammad and Turney’s (2010) positive/negative word list; (5) an augmented version of Cambria et al.’s (2016) word list; (6) an augmented version of Baccianella, Esuli, and Sebastiani’s (2010) list of positive and negative words; (7) a filtered version of Wu, Morstatter, and Liu’s (2016) list of positive and negative slang words; and (8) a version of Taboada et al.’s (2011) positive/negative word list. Because the resulting measures of tweet sentiment are similar across dictionaries, we use the combined and augmented version of Hu and Liu (2004) and Jockers (2017), as it is the default and recommended dictionary in sentimentr based on performance evaluation. Our own evaluation shows that this dictionary does a good job of assessing the tone of Donald Trump’s tweets (see Table 4.1 and the associated discussion). For more technical details on the exact mathematical algorithm used for each of the three approaches to measuring tweet sentiment, please see the online documentation for the sentimentr package (Version 2.7.1): https://github.com/trinker/sentimentr.
Three Measures of Tweet Sentiment
As shown in Fig. 4.11, the measures of tweet sentiment generated from three separate approaches are very similar. The one notable difference is that the measure that upweights negative words in the algorithm has a slight tail to the left, denoting that some tweets are exceptionally negative. In addition, the three measures of tweet sentiment display high levels of correlation (shown in Table 4.7).
Explaining Tweet Sentiment: Addendum
Explaining Retweets: Addendum
Copyright information
© 2020 The Author(s)
About this chapter
Cite this chapter
Ouyang, Y., Waterman, R.W. (2020). Trump Tweets: A Text Sentiment Analysis. In: Trump, Twitter, and the American Democracy. The Evolving American Presidency. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-44242-2_4
Print ISBN: 978-3-030-44241-5
Online ISBN: 978-3-030-44242-2
eBook Packages: Political Science and International Studies (R0)