
Fighting Adversarial Attacks on Online Abusive Language Moderation

  • Conference paper

Applied Computer Sciences in Engineering (WEA 2018)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 915)

Abstract

Lack of moderation in online conversations may result in personal aggression, harassment or cyberbullying. This kind of hostility is usually expressed through profanity or abusive language. On the basis of this assumption, Google recently developed a machine-learning model to detect hostility within a comment. The model assesses the extent to which abusive language is poisoning a conversation, producing a “toxicity” score for the comment. Unfortunately, it has been shown that such a toxicity model can be deceived by adversarial attacks that manipulate the text sequence of the abusive language. In this paper we aim to counter this anomaly. First, we characterise two types of adversarial attacks, one using obfuscation and the other using polarity transformations. Then, we propose a two-stage approach to disarm such attacks by coupling a text deobfuscation method with the toxicity scoring model. The approach was validated on a dataset of approximately 24,000 distorted comments, showing that it is feasible to restore the toxicity scores of the adversarial variants. We anticipate that combining machine learning and text pattern recognition methods operating on different layers of linguistic features will help to foster aggression-safe online conversations despite the adversarial challenges inherent in the versatile nature of written language.
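
To make the two-stage idea concrete, the sketch below (ours, not the authors' implementation) couples a toy deobfuscation step with a toy lexicon-based scorer standing in for the GP toxicity model [10]; every rule and function name here is an illustrative assumption.

```python
import re

# Illustrative leet-speak substitutions used to disguise abusive terms
# (a small assumed subset, not the paper's actual rule set).
LEET_MAP = str.maketrans({"@": "a", "$": "s", "0": "o", "1": "i", "3": "e"})

def deobfuscate(comment: str) -> str:
    """Stage 1: undo simple character-level obfuscations."""
    text = comment.lower().translate(LEET_MAP)
    # Strip separators injected inside words: "id.iot" -> "idiot".
    text = re.sub(r"(\w)[.\-_*+](?=\w)", r"\1", text)
    # Collapse exaggerated character runs: "stuuuupid" -> "stupid".
    text = re.sub(r"(\w)\1{2,}", r"\1", text)
    return text

def score_toxicity(comment: str) -> float:
    """Toy stand-in for the toxicity scorer (the paper queries GP [10])."""
    profane = {"idiot", "stupid", "moron"}
    words = set(re.findall(r"[a-z]+", comment))
    return 1.0 if words & profane else 0.0

def two_stage_score(comment: str) -> float:
    """Stage 2: score the restored text rather than the distorted original."""
    return score_toxicity(deobfuscate(comment))

print(two_stage_score("You are an 1d.iot"))  # 1.0, matching the undistorted comment
```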


References

  1. Dale, R.: NLP in a post-truth world. Nat. Lang. Eng. 23(2), 319–324 (2017)

  2. Hosseinmardi, H.: Survey of computational methods in cyberbullying research. In: Proceedings of the First International Workshop on Computational Methods for CyberSafety. ACM, New York (2016)

  3. Burnap, P., Williams, M.L.: Us and them: identifying cyber hate on Twitter across multiple protected characteristics. EPJ Data Sci. 5(1), 11 (2016)

  4. Nobata, C., Tetreault, J., Thomas, A., Mehdad, Y., Chang, Y.: Abusive language detection in online user content. In: Proceedings of the 25th International Conference on World Wide Web (2016)

  5. Wulczyn, E., Thain, N., Dixon, L.: Ex machina: personal attacks seen at scale. arXiv preprint arXiv:1610.08914, February 2017

  6. Hosseini, H., Kannan, S., Zhang, B., Poovendran, R.: Deceiving Google's Perspective API built for detecting toxic comments. arXiv preprint arXiv:1702.08138, February 2017

  7. Rojas-Galeano, S.: On obstructing obscenity obfuscation. ACM Trans. Web 11(2), 12:1–12:24 (2017). https://doi.org/10.1145/3032963

  8. Laskov, P., Lippmann, R.: Machine learning in adversarial environments. Mach. Learn. 81(2), 115–119 (2010)

  9. Samanta, S., Mehta, S.: Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812 (2017)

  10. PerspectiveAPI: Jigsaw (2017). https://www.perspectiveapi.com. Accessed 26 May 2018

  11. TextPatrolAPI: TPLabs (2017). https://api.textpatrol.tk. Accessed 26 May 2018

  12. Stone, T.E., McMillan, M., Hazelton, M.: Back to swear one: a review of English language literature on swearing and cursing in Western health settings. Aggress. Violent Behav. 25, 65–74 (2015)

  13. Hosseinmardi, H., Mattson, S.A., Ibn Rafiq, R., Han, R., Lv, Q., Mishra, S.: Analyzing labeled cyberbullying incidents on the Instagram social network. In: Liu, T.Y., Scollon, C., Zhu, W. (eds.) Social Informatics. LNCS, vol. 9471, pp. 49–66. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-27433-1_4

Author information

Corresponding author

Correspondence to Sergio Rojas-Galeano.

Appendix. Original Comments

Table 3 shows the original aggressive comments extracted from the GP website [10], together with the toxicity scores obtained at the beginning of this study (note that since GP continuously refines its model by learning from new examples, these scores may have varied over time). The terms triggering toxicity are indicated in bold type and were identified as explained in Sect. 2.4; a sketch of how such scores can be queried appears after the table caption below.

Table 3. Original aggressive comments extracted from [10].
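
For context, a minimal query sketch follows, assuming the public v1alpha1 REST endpoint of the Perspective API [10] and a valid key; `API_KEY` is a placeholder, and the request shape follows the public API documentation rather than the authors' code.

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder; a real Perspective API key is required
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(comment: str) -> float:
    """Return the summary TOXICITY score (0.0 to 1.0) for a comment."""
    body = json.dumps({
        "comment": {"text": comment},
        "requestedAttributes": {"TOXICITY": {}},
    }).encode("utf-8")
    req = urllib.request.Request(
        URL, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        result = json.load(resp)
    return result["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
```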


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Rodriguez, N., Rojas-Galeano, S. (2018). Fighting Adversarial Attacks on Online Abusive Language Moderation. In: Figueroa-García, J., López-Santana, E., Rodriguez-Molano, J. (eds) Applied Computer Sciences in Engineering. WEA 2018. Communications in Computer and Information Science, vol 915. Springer, Cham. https://doi.org/10.1007/978-3-030-00350-0_40

  • DOI: https://doi.org/10.1007/978-3-030-00350-0_40

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-00349-4

  • Online ISBN: 978-3-030-00350-0

  • eBook Packages: Computer Science, Computer Science (R0)
