Abstract
Machine learning is the scientific study of algorithms and statistical models, and it is widely used to make automated decisions. In a causative (poisoning) attack, an adversary exploits knowledge of the learning process to alter the training dataset, injecting carefully crafted malicious data points so that the trained model produces the attacker's intended results. Such attacks succeed when poisoned data is absorbed into the training set used to build the model. Existing defense techniques rely on robust training datasets to preserve the accuracy of the machine learning algorithms under evaluation. This paper presents a novel algorithm, CAP, which substitutes trusted data for untrusted data and thereby improves the reliability of machine learning algorithms.
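The core idea — flagging suspicious training points and substituting trusted ones — can be illustrated with a minimal sketch. This is not the paper's actual CAP algorithm; the function name `cap_filter`, the 1-D feature representation, and the z-score heuristic for spotting poisoned points are all illustrative assumptions.

```python
# Hypothetical sketch of a CAP-style defense: points in the untrusted
# training set that lie far from the trusted data's distribution are
# treated as poisoned and replaced with the nearest trusted sample.
import statistics

def cap_filter(untrusted, trusted, threshold=2.0):
    """Return a cleaned copy of `untrusted`, where each (feature, label)
    pair whose feature lies more than `threshold` standard deviations
    from the trusted mean is replaced by the closest trusted pair."""
    mean = statistics.fmean(x for x, _ in trusted)
    stdev = statistics.pstdev([x for x, _ in trusted]) or 1.0
    cleaned = []
    for x, y in untrusted:
        if abs(x - mean) / stdev > threshold:  # suspected poisoned point
            # substitute the trusted sample closest in feature space
            cleaned.append(min(trusted, key=lambda t: abs(t[0] - x)))
        else:
            cleaned.append((x, y))
    return cleaned

trusted = [(1.0, 0), (1.2, 0), (0.9, 0), (1.1, 0)]
untrusted = [(1.05, 0), (9.5, 1), (0.95, 0)]  # 9.5 is an injected outlier
print(cap_filter(untrusted, trusted))
# the outlier (9.5, 1) is replaced by the nearest trusted point (1.2, 0)
```

The same substitution idea extends to multi-dimensional features by replacing the absolute difference with a distance metric (e.g. Euclidean or Mahalanobis distance) against the trusted set.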
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Suja Mary, D., Suriakala, M. (2020). Detection of Causative Attack and Prevention Using CAP Algorithm on Training Datasets. In: Smys, S., Bestak, R., Rocha, Á. (eds) Inventive Computation Technologies. ICICIT 2019. Lecture Notes in Networks and Systems, vol 98. Springer, Cham. https://doi.org/10.1007/978-3-030-33846-6_48
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-33845-9
Online ISBN: 978-3-030-33846-6
eBook Packages: Intelligent Technologies and Robotics (R0)