Abstract
Machine learning algorithms have found their way into a surprisingly wide range of applications, providing utility and enabling insights to be gathered from data in ways never before possible. These tools, however, were not developed with security in mind, and a deployed algorithm can face a multitude of risks in the real world. This work explores one of those risks: the feasibility of an exploratory attack aimed at stealing an algorithm used in the cybersecurity domain. The process we used is explained in detail, and the results are promising.
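The exploratory attack described above is commonly realised as model extraction: the attacker repeatedly queries the victim model as a black box, records its predicted labels, and trains a local substitute on those (query, label) pairs. The sketch below is a minimal illustration of that loop, not the paper's actual method; the target model, query distribution, and the logistic-regression substitute are all simplified assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box target: the attacker can only query it for
# labels, never inspect its parameters (the exploratory-attack setting).
w_true = np.array([1.5, -2.0])
def query_target(X):
    return (X @ w_true > 0).astype(float)

# Step 1: probe the target with synthetic query inputs.
X = rng.normal(size=(2000, 2))
y = query_target(X)

# Step 2: train a substitute model on the (query, label) pairs.
# A simple logistic regression stands in for the attacker's model here.
w = np.zeros(2)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
    w -= lr * X.T @ (p - y) / len(X)     # gradient step on log-loss

# Step 3: measure how often the substitute agrees with the target
# on fresh inputs -- the usual success metric for extraction.
X_test = rng.normal(size=(1000, 2))
agreement = np.mean((X_test @ w > 0) == (query_target(X_test) > 0))
print(f"substitute/target agreement: {agreement:.2f}")
```

In practice the substitute is typically a deep network and the query budget is a central constraint, but the query-then-imitate structure is the same.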
Acknowledgments
This work is funded under the SPARTA project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 830892.
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Choraś, M., Pawlicki, M., Kozik, R. (2019). The Feasibility of Deep Learning Use for Adversarial Model Extraction in the Cybersecurity Domain. In: Yin, H., Camacho, D., Tino, P., Tallón-Ballesteros, A., Menezes, R., Allmendinger, R. (eds) Intelligent Data Engineering and Automated Learning – IDEAL 2019. IDEAL 2019. Lecture Notes in Computer Science(), vol 11872. Springer, Cham. https://doi.org/10.1007/978-3-030-33617-2_36
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-33616-5
Online ISBN: 978-3-030-33617-2
eBook Packages: Computer Science; Computer Science (R0)