
Detection of Causative Attack and Prevention Using CAP Algorithm on Training Datasets

  • D. Suja Mary
  • M. Suriakala
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 98)

Abstract

Machine learning is the scientific study of algorithms that is widely used for making automated decisions. Attackers who know the learning process can alter the training datasets, driving the trained model towards malicious results. A causative attack in adversarial machine learning is a security threat in which carefully crafted poisonous data points are injected into the training dataset, so that the model is trained on malicious data. Defense techniques build robust training datasets and preserve the accuracy of the machine learning algorithms under evaluation. This paper explains the novel Causative Attack Protection (CAP) algorithm, which substitutes trusted data for untrusted data and thereby improves the reliability of machine learning algorithms.
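
As an illustration only, the Python sketch below shows one way a CAP-style defence could be wired up: an outlier detector flags potentially poisoned training points, which are then replaced with samples from a trusted pool before the model is fitted. The use of IsolationForest, the contamination threshold, and the cap_style_filter helper are assumptions made for this sketch, not the authors' published method.

    # Minimal sketch of a CAP-style defence against causative (poisoning) attacks.
    # Assumption: suspicious training points can be flagged by an outlier detector
    # and replaced with samples drawn from a trusted, verified pool.
    import numpy as np
    from sklearn.ensemble import IsolationForest
    from sklearn.linear_model import LogisticRegression

    def cap_style_filter(X_train, y_train, X_trusted, y_trusted, contamination=0.05):
        """Drop training points flagged as outliers and substitute trusted samples."""
        # fit_predict returns -1 for likely outliers (possible poison), 1 otherwise.
        detector = IsolationForest(contamination=contamination, random_state=0)
        keep = detector.fit_predict(X_train) == 1

        # Retain the clean-looking points and top up with the trusted pool.
        X_clean = np.vstack([X_train[keep], X_trusted])
        y_clean = np.concatenate([y_train[keep], y_trusted])
        return X_clean, y_clean

    # Usage: train on the filtered data rather than the raw (possibly poisoned) set.
    # X_clean, y_clean = cap_style_filter(X_train, y_train, X_trusted, y_trusted)
    # model = LogisticRegression(max_iter=1000).fit(X_clean, y_clean)

The key design choice here is to repair the training set before fitting, rather than trying to harden the learner itself, which matches the abstract's idea of substituting trusted data for untrusted data.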

Keywords

Causative attack · Poisoning attack · Causative Attack Protection (CAP) algorithm · Adversary


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Computer Applications, J.H.A. Agarsen College, University of Madras, Chennai, India
  2. Department of Computer Science, Government Arts College for Men, Chennai, India
