
Detection of Causative Attack and Prevention Using CAP Algorithm on Training Datasets

  • Conference paper
  • First Online:
Inventive Computation Technologies (ICICIT 2019)

Part of the book series: Lecture Notes in Networks and Systems (LNNS, volume 98)


Abstract

Machine learning is the scientific study of algorithms and is widely used to make automated decisions. Attackers who have knowledge of the training process can alter the training datasets, driving the resulting models to produce malicious outcomes. A causative attack in adversarial machine learning is a security threat in which carefully crafted poisonous data points are inserted into the training dataset; the attack occurs when malicious data is injected into the data used to train the model. Defense techniques leverage robust training datasets and preserve accuracy when evaluating machine learning algorithms. The novel CAP algorithm is presented to substitute trusted data for the untrusted data, which improves the reliability of machine learning algorithms.
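The abstract describes the CAP idea only at a high level: flag untrusted (potentially poisoned) training points and substitute trusted data before the model is trained. The paper's actual algorithm is not reproduced on this page, so the following is a minimal sketch of that substitution step under stated assumptions: an off-the-shelf anomaly detector (scikit-learn's IsolationForest) stands in for the poisoning detector, and the function cap_sanitize, its parameters, and the trusted-pool sampling are illustrative names, not the authors' implementation.

    # Minimal sketch (not the authors' CAP implementation): flag suspected
    # poisoned training points with a placeholder anomaly detector and
    # substitute samples drawn from a small trusted dataset before training.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    def cap_sanitize(X_train, y_train, X_trusted, y_trusted, contamination=0.05):
        """Replace training points flagged as anomalous with trusted samples."""
        detector = IsolationForest(contamination=contamination, random_state=0)
        flags = detector.fit_predict(X_train)        # -1 marks suspected poison
        suspect = np.where(flags == -1)[0]
        rng = np.random.default_rng(0)
        picks = rng.integers(0, len(X_trusted), size=len(suspect))
        X_clean, y_clean = X_train.copy(), y_train.copy()
        X_clean[suspect] = X_trusted[picks]          # substitute trusted data
        y_clean[suspect] = y_trusted[picks]
        return X_clean, y_clean

    # Usage: fit the model on the sanitized set instead of the raw one.
    # X_clean, y_clean = cap_sanitize(X_train, y_train, X_trusted, y_trusted)
    # model = LogisticRegression().fit(X_clean, y_clean)

In the paper's setting, the detection criterion and the source of trusted data would be whatever the CAP algorithm specifies; the anomaly detector above is only a stand-in for that step.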



Author information


Corresponding author

Correspondence to D. Suja Mary.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Suja Mary, D., Suriakala, M. (2020). Detection of Causative Attack and Prevention Using CAP Algorithm on Training Datasets. In: Smys, S., Bestak, R., Rocha, Á. (eds) Inventive Computation Technologies. ICICIT 2019. Lecture Notes in Networks and Systems, vol 98. Springer, Cham. https://doi.org/10.1007/978-3-030-33846-6_48

