Abstract
Non-genuine actors, including social bots and sockpuppets, are accounts that are automated or manipulated to post information and sentiment on social media platforms. Although not all social bots and sockpuppets are nefarious, there is evidence that some of these non-genuine accounts, especially on Facebook and Twitter, are maliciously managed by hackers or state actors to spread misinformation and spam. Trolls may or may not be automated, but are consciously or unconsciously divisive.
Copyright information
© 2019 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this chapter
Paletz, S.B.F., Auxier, B.E., Golonka, E.M. (2019). Non-Genuine Actors. In: A Multidisciplinary Framework of Information Propagation Online. SpringerBriefs in Complexity. Springer, Cham. https://doi.org/10.1007/978-3-030-16413-3_6
Print ISBN: 978-3-030-16412-6
Online ISBN: 978-3-030-16413-3