Using Trust Model for Detecting Malicious Activities in Twitter

  • Mohini Agarwal
  • Bin Zhou
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8393)


Online social networks such as Twitter have become a major source of information in recent years. However, these public social media platforms also provide new gateways for malicious users to pursue various malicious purposes. In this paper, we introduce an extended trust model for detecting malicious activities in online social networks. The key insight is to conduct a trust propagation process over a novel heterogeneous social graph that models different types of social activities. We develop two trustworthiness measures and evaluate their performance in detecting malicious activities on a real Twitter data set. The results show that, using our proposed method, the F1 measure for detecting malicious activities in Twitter exceeds 0.9.
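The trust propagation mentioned above can be sketched in general terms. The paper's exact propagation rules, edge types, and weights are not reproduced here; the snippet below is a minimal, hypothetical illustration in which trust flows from a set of seed accounts along typed edges (e.g. "follows", "retweets"), with each edge type carrying its own weight. All names, weights, and the damping scheme are illustrative assumptions, not the authors' actual model:

```python
def propagate_trust(edges, type_weights, seeds, damping=0.85, iters=50):
    """Iteratively redistribute trust over a heterogeneous social graph.

    edges: list of (src, dst, edge_type) tuples (directed).
    type_weights: dict edge_type -> weight, reflecting how strongly that
        activity type transfers trust (illustrative values).
    seeds: dict node -> prior trust (e.g. verified or whitelisted accounts).
    """
    nodes = {n for s, d, _ in edges for n in (s, d)} | set(seeds)
    # Normalize seed priors into a teleport distribution.
    total = sum(seeds.values()) or 1.0
    prior = {n: seeds.get(n, 0.0) / total for n in nodes}
    trust = dict(prior)
    # Weighted out-degree per source node, used to split outgoing trust.
    out_w = {}
    for s, _, t in edges:
        out_w[s] = out_w.get(s, 0.0) + type_weights[t]
    for _ in range(iters):
        # Each node keeps a (1 - damping) share of its prior trust...
        nxt = {n: (1 - damping) * prior[n] for n in nodes}
        # ...and the rest flows along typed edges, proportional to weight.
        for s, d, t in edges:
            nxt[d] += damping * trust[s] * type_weights[t] / out_w[s]
        trust = nxt
    return trust
```

Under such a scheme, accounts whose converged trust score falls below a threshold would be flagged as potentially malicious; the paper's two trustworthiness measures presumably refine this basic idea.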


Keywords: cybercrime, Twitter, heterogeneous social graph, trust model





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  • Mohini Agarwal (1)
  • Bin Zhou (1)

  1. Department of Information Systems, University of Maryland, Baltimore, USA
