Abstract
Adversarial Machine Learning (AML) studies the robustness of classification models when they process data samples that have been intelligently manipulated to confuse them. Procedures for crafting such confusing samples exploit concrete vulnerabilities of the learning algorithm at hand, whereby small perturbations can cause a given data instance to be misclassified. In this context, the literature has so far gravitated around AML strategies that modify data instances for diverse learning algorithms, in most cases for image classification. This work builds upon this background literature to address AML for distance-based time series classifiers (e.g., nearest neighbors), in which attacks (i.e., modifications of the samples to be classified by the model) must be intelligently devised by taking into account the similarity measure used to compare time series. In particular, we propose different attack strategies relying on guided perturbations of the input time series, based on gradient information provided by a smoothed version of the distance-based model under attack. Furthermore, we formulate the AML sample crafting process as an optimization problem driven by the Pareto trade-off between (1) a measure of distortion of the input sample with respect to its original version and (2) the probability that the crafted sample confuses the model. This problem is efficiently tackled by using multi-objective heuristic solvers. Several experiments are discussed so as to assess whether the crafted adversarial time series succeed in confusing the targeted distance-based model.
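The gradient-guided attack summarized above can be sketched in Python. This is a minimal illustration, not the authors' implementation: it assumes an \(\ell_1\) local DTW cost, a soft-max over negative DTW distances as the smoothed nearest-neighbor model, and it differentiates by holding each optimal alignment fixed; the function names (`dtw_path`, `class_prob_and_grad`) are hypothetical.

```python
import numpy as np

def dtw_path(u, v):
    """DTW distance with an l1 local cost, plus the optimal alignment path."""
    n, m = len(u), len(v)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(u[i - 1] - v[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack the optimal alignment p* (ties resolved toward the diagonal).
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return D[n, m], path[::-1]

def class_prob_and_grad(u, refs, labels, target):
    """P(target | u) under a soft-max over negative DTW distances, and its
    gradient w.r.t. u, treating each optimal alignment as fixed (a smoothing
    assumption: DTW is only piecewise differentiable)."""
    dists, grads = [], []
    for v in refs:
        d, path = dtw_path(u, v)
        g = np.zeros_like(u)
        for i, j in path:               # d DTW / d u_i = sgn(u_i - v_j) on p*
            g[i] += np.sign(u[i] - v[j])
        dists.append(d)
        grads.append(g)
    w = np.exp(-np.asarray(dists))      # soft-max weights over -DTW
    p = w / w.sum()
    prob = p[labels == target].sum()
    # d prob / d u = sum_n p_n (prob - 1{y_n = target}) * d DTW(u, U_n) / d u
    grad = sum(p[n] * (prob - float(labels[n] == target)) * grads[n]
               for n in range(len(refs)))
    return prob, grad
```

A single FGSM-style step `u - eps * np.sign(grad)` then lowers the model's confidence in the target class; the multi-objective variant described in the abstract would instead search over perturbations trading distortion against this confusion probability.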
Acknowledgments
This work has been supported by the Basque Government through the EMAITEK, BERC 2014–2017 and ELKARTEK programs, and by the Spanish Ministry of Economy and Competitiveness (MINECO) through the BCAM Severo Ochoa excellence accreditations SVP-2014-068574 and SEV-2013-0323, and through the project TIN2017-82626-R funded by AEI/FEDER, UE.
A Appendix: Computation of \(\text {P}(c|U^v)\) gradient
In this appendix we formally derive, for the DTW setting:
Consider the DTW-SNN model introduced in Sect. 3. The partial derivative with respect to the input variable \(u_d\) is given by:
For the sake of a simpler notation, let us write the soft-max function as \(\sigma (U,U_n) = H_1/H_2\), where:
Equation (20) is, therefore, rewritten as follows:
To compute the partial derivatives of \(H_1\) and \(H_2\), note that we need to differentiate \(\text {DTW}(U, U_n)\). To this end, let \(p_n^{*}\) be the optimal alignment between \(U_n\) and U. In other words, let \(p_n^{*}\) be the alignment satisfying:
where \(u_i\) and \(u_j^n\) are the i-th and j-th observations of U and \(U_n\) time series respectively (see Eqs. (8) and (9)).
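The display defining \(p_n^{*}\) did not survive extraction; under the standard DTW formulation with an \(\ell_1\) local cost (an assumption, since Eqs. (8) and (9) are not reproduced here), it presumably reads:

```latex
% Assumed standard DTW alignment objective (l1 local cost); the exact
% cost of the paper's Eqs. (8)-(9) may differ.
p_n^{*} = \mathop{\arg\min}_{p \in \mathcal{P}}
          \sum_{(i,j) \in p} \left| u_i - u_j^{n} \right|,
\qquad
\mathrm{DTW}(U, U_n) = \sum_{(i,j) \in p_n^{*}} \left| u_i - u_j^{n} \right|,
```

where \(\mathcal{P}\) denotes the set of admissible warping paths.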
Considering the equation above, the derivatives of \(H_1\) and \(H_2\) are given by:
and
where \(\delta _{i,d}\) is the Kronecker delta.
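Since the displayed equations of this derivation were lost in extraction, they can be reconstructed in hedged form. Assuming the soft-max weights are taken over negative DTW distances, i.e. \(H_1 = e^{-\mathrm{DTW}(U,U_n)}\) and \(H_2 = \sum_{m=1}^{N} e^{-\mathrm{DTW}(U,U_m)}\), with an \(\ell_1\) local cost along the fixed optimal alignments, the quotient and chain rules give:

```latex
% Hedged reconstruction: the soft-max form, temperature and local cost
% assumed above may differ from those used in the original Sect. 3.
\begin{align}
  \frac{\partial \sigma(U,U_n)}{\partial u_d}
    &= \frac{H_2\,\dfrac{\partial H_1}{\partial u_d}
           - H_1\,\dfrac{\partial H_2}{\partial u_d}}{H_2^2},\\[4pt]
  \frac{\partial H_1}{\partial u_d}
    &= -\,e^{-\mathrm{DTW}(U,U_n)}
       \sum_{(i,j)\in p_n^{*}} \delta_{i,d}\,
       \mathrm{sgn}\big(u_i - u_j^{n}\big),\\[4pt]
  \frac{\partial H_2}{\partial u_d}
    &= -\sum_{m=1}^{N} e^{-\mathrm{DTW}(U,U_m)}
       \sum_{(i,j)\in p_m^{*}} \delta_{i,d}\,
       \mathrm{sgn}\big(u_i - u_j^{m}\big),
\end{align}
```

where, consistently with the text, \(\delta_{i,d}\) is the Kronecker delta and \(p_m^{*}\) denotes the optimal alignment between \(U\) and \(U_m\).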
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Oregi, I., Del Ser, J., Perez, A., Lozano, J.A. (2018). Adversarial Sample Crafting for Time Series Classification with Elastic Similarity Measures. In: Del Ser, J., Osaba, E., Bilbao, M., Sanchez-Medina, J., Vecchio, M., Yang, XS. (eds) Intelligent Distributed Computing XII. IDC 2018. Studies in Computational Intelligence, vol 798. Springer, Cham. https://doi.org/10.1007/978-3-319-99626-4_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-99625-7
Online ISBN: 978-3-319-99626-4
eBook Packages: Engineering (R0)