
Adversarial Sample Crafting for Time Series Classification with Elastic Similarity Measures

Conference paper in Intelligent Distributed Computing XII (IDC 2018).

Part of the book series: Studies in Computational Intelligence (SCI, volume 798).

Abstract

Adversarial Machine Learning (AML) studies the robustness of classification models when they process data samples that have been intelligently manipulated to confuse them. Procedures for crafting such confusing samples exploit concrete vulnerabilities of the learning algorithm at hand, whereby small perturbations cause a given data instance to be misclassified. In this context, the literature has so far gravitated around AML strategies that modify data instances for diverse learning algorithms, in most cases for image classification. This work builds upon that background to address AML for distance-based time series classifiers (e.g., nearest neighbors), in which attacks (i.e., modifications of the samples to be classified by the model) must be devised by taking into account the similarity measure used to compare time series. In particular, we propose several attack strategies that rely on guided perturbations of the input time series, based on gradient information provided by a smoothed version of the distance-based model under attack. Furthermore, we formulate the AML sample crafting process as an optimization problem driven by the Pareto trade-off between (1) a measure of distortion of the input sample with respect to its original version; and (2) the probability that the crafted sample confuses the model. This problem is efficiently tackled with multi-objective heuristic solvers. Several experiments are discussed to assess whether the crafted adversarial time series succeed in confusing the distance-based model under target.



Acknowledgments

This work has been supported by the Basque Government through the EMAITEK, BERC 2014–2017 and ELKARTEK programs, and by the Spanish Ministry of Economy and Competitiveness (MINECO) through the BCAM Severo Ochoa excellence accreditations SVP-2014-068574 and SEV-2013-0323, and through the project TIN2017-82626-R (AEI/FEDER, UE).

Author information

Correspondence to Izaskun Oregi.


A Appendix: Computation of \(\text {P}(c|U^v)\) gradient


In this appendix we formally derive, for the DTW setting:

$$\begin{aligned} \nabla \text {P}(c|U^v) = \left( \frac{\partial \text {P}(c|U)}{\partial u_d}\right) _{d\in \{1,...,v\}}. \end{aligned}$$
(19)

Consider the DTW-SNN model introduced in Sect. 3. The partial derivative with respect to the input variable \(u_d\) is given by:

$$\begin{aligned} \frac{\partial \text {P}(c|U)}{\partial u_d} = \frac{1}{N}\left[ \sum _{n=1}^{N} \delta _{c,y_n} \frac{\partial \sigma (U,U_n)}{\partial u_d} \right] . \end{aligned}$$
(20)

For the sake of simpler notation, let us write the soft-max function as \(\sigma (U,U_n) = H_1/H_2\), where:

$$\begin{aligned} H_1 = \exp \left[ -\text {DTW}(U,U_n)^2\right] \text { and } H_2 = \sum _{m=1}^N \exp \left[ -\text {DTW}(U,U_m)^2\right] . \end{aligned}$$
(21)
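With these definitions, \(\text {P}(c|U)\) is a soft-max-weighted vote over the training set. The following minimal sketch (the function name `dtw_snn_probability` and the toy inputs are ours, not the paper's) computes this probability from precomputed DTW distances:

```python
import math

def dtw_snn_probability(dtw_dists, labels, c):
    """P(c|U) built from Eq. (21): soft-max weights H1/H2 over the
    precomputed DTW distances between U and each training series U_n,
    restricted to the training series whose label y_n equals c."""
    N = len(dtw_dists)
    h = [math.exp(-d ** 2) for d in dtw_dists]   # exp[-DTW(U, U_n)^2]
    h2 = sum(h)                                  # H2 in Eq. (21)
    return sum(hn for hn, yn in zip(h, labels) if yn == c) / (N * h2)
```

Moving \(U\) closer, in DTW distance, to training series of class \(c\) increases \(\text {P}(c|U)\); this monotonicity is what the gradient-guided perturbations exploit.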

Equation (20) is, therefore, rewritten as follows:

$$\begin{aligned} \frac{\partial \text {P}(c|U)}{\partial u_d} = \frac{1}{N}\left[ \sum _{n=1}^{N} \delta _{c,y_n}\frac{\frac{\partial H_1}{\partial u_d}H_2 - H_1\frac{\partial H_2}{\partial u_d} }{H_2^2} \right] . \end{aligned}$$
(22)

To compute the partial derivatives of \(H_1\) and \(H_2\), note that we need to differentiate \(\text {DTW}(U, U_n)\). To this end, let \(p_n^{*}\) be the optimal alignment between \(U_n\) and U. In other words, let \(p_n^{*}\) be the alignment satisfying:

$$\begin{aligned} \text {DTW}(U, U_n) =\sqrt{\sum _{(i,j)\in p_n^{*}}(u_i - u_j^n)^2 }, \end{aligned}$$
(23)

where \(u_i\) and \(u_j^n\) are the i-th and j-th observations of the time series U and \(U_n\), respectively (see Eqs. (8) and (9)).
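The optimal alignment \(p_n^{*}\) can be recovered by backtracking through the standard DTW dynamic-programming table. A minimal sketch (the function name `dtw_with_path` is illustrative, not from the paper):

```python
import math

def dtw_with_path(U, V):
    """DTW distance of Eq. (23) plus the optimal alignment p*,
    returned as a list of index pairs (i, j)."""
    n, m = len(U), len(V)
    INF = float("inf")
    # D[i][j]: cost of the best alignment of U[:i] and V[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (U[i - 1] - V[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    # Backtrack from (n, m) to recover p*
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda t: D[t[0]][t[1]])
    path.append((0, 0))
    path.reverse()
    return math.sqrt(D[n][m]), path
```

Identical series yield distance zero along the diagonal path, which is a quick sanity check for the backtracking step.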

Considering the equation above, the derivatives of \(H_1\) and \(H_2\) are given by:

$$\begin{aligned} \frac{\partial H_1}{\partial u_d} = -2 \sum _{(i,j)\in p_n^{*}} \delta _{i,d} (u_d - u_j^{n}) \exp \left[ -\text {DTW}(U,U_n)^2 \right] , \end{aligned}$$
(24)

and

$$\begin{aligned} \frac{\partial H_2}{\partial u_d} = -2 \sum _{m=1}^{N} \left\{ \sum _{(i,j)\in p_m^{*}} \delta _{i,d} (u_d - u_j^{m}) \exp \left[ -\text {DTW}(U,U_m)^2 \right] \right\} , \end{aligned}$$
(25)

where \(\delta _{i,d}\) is the Kronecker delta.
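Putting Eqs. (20)–(25) together, the gradient of \(\text {P}(c|U)\) can be assembled from the optimal alignment paths and verified against finite differences. A self-contained sketch (function names are ours; it assumes the optimal path is unique at \(U\), so the squared-DTW term is differentiable there):

```python
import math

def _sq_dtw_and_path(U, V):
    """Squared DTW cost and the backtracked optimal alignment p* (Eq. (23))."""
    n, m = len(U), len(V)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = (U[i - 1] - V[j - 1]) ** 2 + min(
                D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)],
                   key=lambda t: D[t[0]][t[1]])
    path.append((0, 0))
    return D[n][m], path

def prob_and_grad(U, train, labels, c):
    """P(c|U) and its gradient, assembled from Eqs. (20)-(25)."""
    N = len(train)
    sq, paths = zip(*(_sq_dtw_and_path(U, Un) for Un in train))
    H = [math.exp(-s) for s in sq]          # exp[-DTW(U, U_n)^2], Eq. (21)
    H2 = sum(H)
    H1 = sum(H[n] for n in range(N) if labels[n] == c)
    prob = H1 / (N * H2)
    grad = []
    for d in range(len(U)):
        # dH_n/du_d = -2 sum_{(i,j) in p*_n : i = d} (u_d - u_j^n) H_n,
        # i.e. the inner sums of Eqs. (24)-(25)
        dH = [-2.0 * sum(U[d] - train[n][j] for i, j in paths[n] if i == d) * H[n]
              for n in range(N)]
        dH1 = sum(dH[n] for n in range(N) if labels[n] == c)
        dH2 = sum(dH)
        grad.append((dH1 * H2 - H1 * dH2) / (N * H2 ** 2))  # quotient rule, Eq. (22)
    return prob, grad
```

Because the optimal path is fixed at the optimum, differentiating only along \(p_n^{*}\) is exact wherever the path is unique; at ties in the dynamic-programming table the squared-DTW cost is only subdifferentiable, and the sketch silently picks one optimal path.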


Copyright information

© 2018 Springer Nature Switzerland AG

About this paper


Cite this paper

Oregi, I., Del Ser, J., Perez, A., Lozano, J.A. (2018). Adversarial Sample Crafting for Time Series Classification with Elastic Similarity Measures. In: Del Ser, J., Osaba, E., Bilbao, M., Sanchez-Medina, J., Vecchio, M., Yang, XS. (eds) Intelligent Distributed Computing XII. IDC 2018. Studies in Computational Intelligence, vol 798. Springer, Cham. https://doi.org/10.1007/978-3-319-99626-4_3

  • DOI: https://doi.org/10.1007/978-3-319-99626-4_3

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-99625-7

  • Online ISBN: 978-3-319-99626-4

  • eBook Packages: Engineering, Engineering (R0)
