The Digital Misinformation Pipeline

Proposal for a Research Agenda

Chapter in: Positive Learning in the Age of Information

Abstract

Digital misinformation poses a major risk to society and thrives on cognitive, social, and algorithmic biases. As social media become engulfed in rumors, hoaxes, and fake news, a “research pipeline” for the detection, monitoring, and checking of digital misinformation is needed. This chapter gives a brief introductory survey of the main research on these topics. The problem of digital misinformation does not lie squarely within a single discipline; instead, it is informed by research in several areas. An integrated research agenda devoted to the implementation of these tools should therefore take into account a wide range of perspectives.
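
The pipeline framing in the abstract can be made concrete with a small sketch. Everything below (the Post record, the three stage functions, and the toy heuristics inside them) is an illustrative assumption rather than the chapter's implementation; it only shows how detection, monitoring, and checking stages might be chained over a stream of social media posts.

from dataclasses import dataclass, field
from typing import Callable, Iterable, List

# Hypothetical record for a social media post; the fields are illustrative.
@dataclass
class Post:
    text: str
    claims: List[str] = field(default_factory=list)
    flags: List[str] = field(default_factory=list)

# A stage is just a function Post -> Post, so stages can be swapped or reordered freely.
Stage = Callable[[Post], Post]

def detect(post: Post) -> Post:
    """Detection: flag posts whose text matches crude low-credibility cues (placeholder heuristic)."""
    if any(cue in post.text.lower() for cue in ("miracle cure", "they don't want you to know")):
        post.flags.append("suspicious-language")
    return post

def monitor(post: Post) -> Post:
    """Monitoring: extract check-worthy claims; here, naively, any sentence containing a number."""
    post.claims = [s.strip() for s in post.text.split(".") if any(ch.isdigit() for ch in s)]
    return post

def fact_check(post: Post) -> Post:
    """Checking: hand extracted claims to an external verifier; stubbed out as a flag here."""
    if post.claims:
        post.flags.append("needs-fact-check")
    return post

def run_pipeline(posts: Iterable[Post], stages: List[Stage]) -> List[Post]:
    """Push every post through the ordered stages and collect the annotated results."""
    results = []
    for post in posts:
        for stage in stages:
            post = stage(post)
        results.append(post)
    return results

if __name__ == "__main__":
    stream = [Post("Miracle cure kills 99 percent of germs. They don't want you to know.")]
    for p in run_pipeline(stream, [detect, monitor, fact_check]):
        print(p.flags, p.claims)

In practice each stage would wrap a real component, for example a bot or clickbait detector for detection, a tracking platform such as Hoaxy for monitoring, and a claim verification system such as ClaimBuster for checking.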

Author information

Correspondence to Giovanni Luca Ciampaglia.


Copyright information

© 2018 Springer Fachmedien Wiesbaden GmbH

About this chapter

Cite this chapter

Ciampaglia, G.L. (2018). The Digital Misinformation Pipeline. In: Zlatkin-Troitschanskaia, O., Wittum, G., Dengel, A. (eds) Positive Learning in the Age of Information. Springer VS, Wiesbaden. https://doi.org/10.1007/978-3-658-19567-0_25

  • DOI: https://doi.org/10.1007/978-3-658-19567-0_25

  • Publisher Name: Springer VS, Wiesbaden

  • Print ISBN: 978-3-658-19566-3

  • Online ISBN: 978-3-658-19567-0

  • eBook Packages: Education (R0)
