
Fact-Check Spreading Behavior in Twitter: A Qualitative Profile for False-Claim News

  • Francisco S. Marcondes
  • José João Almeida
  • Dalila Durães
  • Paulo Novais
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1160)

Abstract

Fact-check links are usually spread as a plain tweet containing just the link. Since this is not typical human behavior, it may produce an uncanny effect, hinder the reader’s attention, and weaken counter-propaganda influence. This paper presents a profile of fact-check link spreading on Twitter (corresponding to TRL-1) and, as an additional outcome, proposes a preliminary behavior design based on it (corresponding to TRL-2). The underlying hypothesis is that, by simulating human-like behavior, a bot gains more attention and exerts more influence on its followers.

Keywords

Chatbot · Social agent · Fake news · Fact check · Social media

Notes

Acknowledgments

This work has been supported by national funds through FCT - Fundação para a Ciência e Tecnologia within the Project Scope: UID/CEC/00319/2019.

References

  1. Aiello, L.M., Deplano, M., Schifanella, R., Ruffo, G.: People are strange when you’re a stranger: impact and influence of bots on social networks. In: Sixth International AAAI Conference on Weblogs and Social Media (2012)
  2. Bickmore, T.W., Picard, R.W.: Establishing and maintaining long-term human-computer relationships. ACM Trans. Comput.-Hum. Interact. 12, 293–327 (2005)
  3. Brooker, P.: My unexpectedly militant bots: a case for programming-as-social-science. Sociol. Rev. 67(6), 1228–1248 (2019)
  4. Ferrara, E., Varol, O., Davis, C., et al.: The rise of social bots. Commun. ACM 59(7) (2016)
  5. Jingling, Z., Huiyun, Z., Baojiang, C.: Sentence similarity based on semantic vector model. In: Proceedings of the 2014 Ninth International Conference on P2P, Parallel, Grid, Cloud and Internet Computing, 3PGCIC 2014, pp. 499–503. IEEE Computer Society, Washington, DC (2014)
  6. Lucas, G.M., Boberg, J., Traum, D., Artstein, R., Gratch, J., Gainer, A., Johnson, E., Leuski, A., Nakano, M.: Culture, errors, and rapport-building dialogue in social agents. In: Proceedings of the 18th International Conference on Intelligent Virtual Agents, IVA 2018, pp. 51–58. Association for Computing Machinery, New York (2018)
  7. Luceri, L., Deb, A., Badawy, A., Ferrara, E.: Red bots do it better: comparative analysis of social bot partisan behavior. In: Companion Proceedings of the 2019 World Wide Web Conference, pp. 1007–1012. ACM (2019)
  8. Marcondes, F.S., Almeida, J.J., Novais, P.: A short survey on chatbot technology: failure in raising the state of the art. In: International Symposium on Distributed Computing and Artificial Intelligence, pp. 28–36. Springer, Heidelberg (2019)
  9. Mori, M., MacDorman, K.F., Kageki, N.: The uncanny valley [from the field]. IEEE Robot. Autom. Mag. 19(2), 98–100 (2012)
  10. Nass, C., Steuer, J., Tauber, E.R.: Computers are social actors. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 72–78. ACM (1994)
  11. Shao, C., Ciampaglia, G.L., Varol, O., Yang, K.-C., Flammini, A., Menczer, F.: The spread of low-credibility content by social bots. Nat. Commun. 9(1), 4787 (2018)
  12. Shao, C., Hui, P., Wang, L., Jiang, X., Flammini, A., Menczer, F., Ciampaglia, G.: Anatomy of an online misinformation network. PLoS One 13, e0196087 (2018)
  13. Vo, N., Lee, K.: The rise of guardians: fact-checking URL recommendation to combat fake news. In: The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR 2018, pp. 275–284. ACM, New York (2018)
  14. Vosoughi, S., Roy, D., Aral, S.: The spread of true and false news online. Science 359(6380), 1146–1151 (2018)
  15. Waller, J.: Strategic Influence: Public Diplomacy, Counterpropaganda, and Political Warfare. Institute of World Politics Press (2009)
  16. Wolf, M.J., Miller, K., Grodzinsky, F.S.: Why we should have seen that coming: comments on Microsoft’s Tay “experiment,” and wider implications. SIGCAS Comput. Soc. 47(3), 54–64 (2017)
  17. Woolley, S., Howard, P.: Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford Studies in Digital Politics. Oxford University Press (2018)

Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Francisco S. Marcondes (1)
  • José João Almeida (1)
  • Dalila Durães (1, 2)
  • Paulo Novais (1)
  1. ALGORITMI Centre, Department of Informatics, University of Minho, Braga, Portugal
  2. CIICESI, ESTG, Polytechnic Institute of Porto, Felgueiras, Portugal
