
Pattern Matching in Sequential Data Using Reservoir Projections

  • Sebastián Basterrech
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11554)

Abstract

A relevant problem in data science is to define an efficient and reliable algorithm for finding specific patterns in a given signal. Problems of this type often arise in medical applications, biophysical systems, complex systems, financial analysis, and several other domains. Here, we introduce a new model based on the ability of Recurrent Neural Networks (RNNs) to model time series. The technique encodes the temporal information of the reference signal and of a given query in a feature space; this encoding is performed by an RNN. In the feature space, we apply similarity techniques to analyse the differences among the projected points. The proposed method has an advantage over the state of the art: it can produce good results at a lower computational cost. We evaluate the proposal on three benchmark datasets.
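
The abstract summarises the pipeline: drive a fixed recurrent reservoir with both the reference signal and the query, then compare the resulting trajectories in the reservoir's state space. As a rough illustration of that idea (not the paper's actual implementation), the sketch below assumes a standard Echo State Network with tanh units and a sliding mean Euclidean distance as the similarity measure; the reservoir size, the 0.9 spectral-radius rescaling, and all function names are illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res=100, spectral_radius=0.9, input_scale=0.5):
    """Random reservoir rescaled so the echo state property plausibly holds."""
    W = rng.uniform(-1.0, 1.0, (n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-input_scale, input_scale, n_res)
    return W, W_in

def reservoir_states(signal, W, W_in):
    """Project a 1-D signal into the reservoir state space, one state per step."""
    x = np.zeros(W.shape[0])
    states = []
    for u in signal:
        x = np.tanh(W @ x + W_in * u)  # leak-free ESN update with tanh units
        states.append(x.copy())
    return np.asarray(states)

def match_query(reference, query, W, W_in):
    """Slide the query's projected trajectory along the reference's and
    return the offset with the smallest mean Euclidean distance."""
    R = reservoir_states(reference, W, W_in)
    Q = reservoir_states(query, W, W_in)
    m = len(Q)
    dists = [np.linalg.norm(R[i:i + m] - Q, axis=1).mean()
             for i in range(len(R) - m + 1)]
    best = int(np.argmin(dists))
    return best, dists[best]

# Toy usage: plant the query inside noise and try to recover its offset (200).
query = np.sin(np.linspace(0, 4 * np.pi, 50))
reference = np.concatenate([rng.normal(0, 0.3, 200), query, rng.normal(0, 0.3, 200)])
W, W_in = make_reservoir()
offset, dist = match_query(reference, query, W, W_in)
print(f"best match at offset {offset} (mean distance {dist:.3f})")
```

Because the reservoir carries a fading memory of the noise preceding the planted query, the recovered offset may be shifted by a few steps from 200; that fading memory is exactly the Echo State Property named in the keywords.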

Keywords

Recurrent Neural Networks · Reservoir Computing · Time series matching · Similarity · Echo State Property

Acknowledgements

This work was supported by projects SP2019/135 and SP2019/141 of the Student Grant System of the VSB-Technical University of Ostrava, Czech Republic, funded by the Ministry of Education, Youth and Sports as Specific Research Projects, and by the Technology Agency of the Czech Republic within project TN01000024, National Competence Center - Cybernetics and Artificial Intelligence.

References

  1. Basterrech, S.: Empirical analysis of the necessary and sufficient conditions of the echo state property. In: 2017 International Joint Conference on Neural Networks, IJCNN 2017, Anchorage, AK, USA, 14–19 May 2017, pp. 888–896 (2017). https://doi.org/10.1109/IJCNN.2017.7965946
  2. Basterrech, S., Rubino, G.: Echo state queueing networks: a combination of reservoir computing and random neural networks. Probab. Eng. Inf. Sci. 31, 457–476 (2017). https://doi.org/10.1017/S0269964817000110
  3. Bengio, Y.: Learning deep architectures for AI. Found. Trends Mach. Learn. 2(1), 1–127 (2009). https://doi.org/10.1561/2200000006
  4. Butcher, J.B., Verstraeten, D., Schrauwen, B., Day, C.R., Haycock, P.W.: Reservoir computing and extreme learning machines for non-linear time-series data analysis. Neural Netw. 38, 76–89 (2013)
  5. Faloutsos, C., Ranganathan, M., Manolopoulos, Y.: Fast subsequence matching in time-series databases. In: SIGMOD Conference, pp. 419–429 (1994)
  6. Funahashi, K., Nakamura, Y.: Approximation of dynamical systems by continuous time recurrent neural networks. Neural Netw. 6, 801–806 (1993)
  7. Gallicchio, C., Micheli, A.: Architectural and Markovian factors of echo state networks. Neural Netw. 24(5), 440–456 (2011). https://doi.org/10.1016/j.neunet.2011.02.002
  8. Gallicchio, C., Micheli, A., Pedrelli, L.: Deep reservoir computing: a critical experimental analysis. Neurocomputing 268, 87–99 (2017). https://doi.org/10.1016/j.neucom.2016.12.089
  9. Hyndman, R.: Time series data library. http://robjhyndman.com/TSDL
  10. Jaeger, H.: The "echo state" approach to analysing and training recurrent neural networks. Technical Report 148, German National Research Center for Information Technology (2001)
  11. Jaeger, H.: Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the "echo state network" approach. Technical Report 159, German National Research Center for Information Technology (2002)
  12. Jaeger, H., Lukoševičius, M., Popovici, D., Siewert, U.: Optimization and applications of echo state networks with leaky-integrator neurons. Neural Netw. 20(3), 335–352 (2007)
  13. Keogh, E., Smyth, P.: A probabilistic approach to fast pattern matching in time series databases. AAAI Technical Report WS-98-07, pp. 52–57 (1998)
  14. Lukoševičius, M.: On self-organizing reservoirs and their hierarchies. Technical Report 25, Jacobs University, Bremen (2010)
  15. Lukoševičius, M., Jaeger, H.: Reservoir computing approaches to recurrent neural network training. Comput. Sci. Rev. 3, 127–149 (2009). https://doi.org/10.1016/j.cosrev.2009.03.005
  16. Maass, W.: Noisy spiking neurons with temporal coding have more computational power than sigmoidal neurons. Technical Report TR-1999-037, Institute for Theoretical Computer Science, Technische Universitaet Graz, Graz, Austria (1999). http://www.igi.tugraz.at/psfiles/90.pdf
  17. Manjunath, G., Jaeger, H.: Echo state property linked to an input: exploring a fundamental characteristic of recurrent neural networks. Neural Comput. 25(3), 671–696 (2013). https://doi.org/10.1162/NECO_a_00411
  18. Martens, J., Sutskever, I.: Training deep and recurrent networks with hessian-free optimization. In: Montavon, G., Orr, G.B., Müller, K.-R. (eds.) Neural Networks: Tricks of the Trade. LNCS, vol. 7700, pp. 479–535. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-35289-8_27
  19. Mueen, A., et al.: The fastest similarity search algorithm for time series subsequences under Euclidean distance, August 2017. http://www.cs.unm.edu/~mueen/FastestSimilaritySearch.html
  20. Rodan, A., Tino, P.: Simple deterministically constructed cycle reservoirs with regular jumps. Neural Comput. 24, 1822–1852 (2012). https://doi.org/10.1162/NECO_a_00297
  21. Rumelhart, D.E., Hinton, G.E., McClelland, J.L.: A general framework for parallel distributed processing. In: Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vol. 1, chap. 2, pp. 45–76. MIT Press, Cambridge (1986)
  22. Schmidhuber, J., Wierstra, D., Gagliolo, M., Gomez, F.: Training recurrent networks by Evolino. Neural Comput. 19(3), 757–779 (2007)
  23. Siegelmann, H.T., Sontag, E.D.: Turing computability with neural nets. Appl. Math. Lett. 4(6), 77–80 (1991). https://doi.org/10.1016/0893-9659(91)90080-F
  24. Verstraeten, D., Schrauwen, B., D'Haene, M., Stroobandt, D.: An experimental unification of reservoir computing methods. Neural Netw. 20(3), 391–403 (2007)
  25. Wainrib, G., Galtier, M.N.: A local echo state property through the largest Lyapunov exponent. Neural Netw. 76, 39–45 (2016)
  26. Wang, D., Li, M.: Stochastic configuration networks: fundamentals and algorithms. IEEE Trans. Cybern. 47(10), 3466–3479 (2017). https://doi.org/10.1109/TCYB.2017.2734043
  27. LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521, 436–444 (2015)
  28. Yildiz, I.B., Jaeger, H., Kiebel, S.J.: Re-visiting the echo state property. Neural Netw. 35, 1–9 (2012). https://doi.org/10.1016/j.neunet.2012.07.005
  29. Zhang, B., Miller, D.J., Wang, Y.: Nonlinear system modeling with random matrices: echo state networks revisited. IEEE Trans. Neural Netw. Learn. Syst. 23(1), 175–182 (2012)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Department of Computer Science, Faculty of Electrical Engineering and Informatics, VSB-Technical University of Ostrava, Ostrava-Poruba, Czech Republic
