
Cyber security meets artificial intelligence: a survey

Jian-hua Li
Review

Abstract

There is a wide range of interdisciplinary intersections between cyber security and artificial intelligence (AI). On one hand, AI technologies, such as deep learning, can be introduced into cyber security to construct smart models for malware classification, intrusion detection, and threat intelligence sensing. On the other hand, AI models face various cyber threats that disturb their samples, learning processes, and decisions. Thus, AI models need dedicated cyber security defense and protection technologies to combat adversarial machine learning, preserve privacy in machine learning, secure federated learning, etc. Based on these two aspects, we review the intersection of AI and cyber security. First, we summarize existing research efforts on combating cyber attacks with AI, covering both traditional machine learning methods and existing deep learning solutions. Then, we analyze the counterattacks from which AI itself may suffer, dissect their characteristics, and classify the corresponding defense methods. Finally, from the perspectives of constructing encrypted neural networks and realizing secure federated deep learning, we elaborate on existing research on how to build a secure AI system.
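
To make the kind of attack mentioned above concrete, the minimal sketch below shows a fast gradient sign method (FGSM)-style evasion of a toy logistic-regression classifier standing in for a malware detector. The weights, feature values, and perturbation budget epsilon are invented for illustration and are not taken from the surveyed work; the same principle carries over to deep networks, where the input gradient is obtained by backpropagation.

```python
# Illustrative sketch only (not from the paper): FGSM-style evasion of a toy
# logistic-regression "malware detector". Weights, features, and epsilon are
# invented for the example.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([1.2, -0.7, 2.1, 0.4])   # assumed learned weights
b = -0.3                              # assumed bias
x = np.array([0.9, 0.1, 0.8, 0.5])    # assumed features of a malicious sample
y = 1.0                               # ground-truth label: malicious

# Gradient of the cross-entropy loss with respect to the input x.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM: take an epsilon-bounded step in the direction that increases the loss.
epsilon = 0.3
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {sigmoid(w @ x + b):.3f}")      # ~0.93, flagged as malicious
print(f"adversarial score: {sigmoid(w @ x_adv + b):.3f}")   # noticeably lower confidence
```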

Key words

Cyber security; Artificial intelligence (AI); Attack detection; Defensive techniques

CLC number

TP309 



Copyright information

© Editorial Office of Journal of Zhejiang University Science and Springer-Verlag GmbH Germany, part of Springer Nature 2018

Authors and Affiliations

  1. School of Cyber Security, Shanghai Jiao Tong University, Shanghai, China
