Evolutionary Optimization of Liquid State Machines for Robust Learning

  • Yan Zhou
  • Yaochu Jin
  • Jinliang Ding
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11554)


Abstract

Liquid State Machines (LSMs) are a computational model of spiking neural networks with recurrent connections in a reservoir. Although believed to be more biologically plausible, LSMs have not yet been as successful as other artificial neural networks on real-world learning problems, mainly because their learning performance is highly sensitive to the type of stimuli. To address this issue, this paper adopts the covariance matrix adaptation evolution strategy (CMA-ES) to optimize the topology and parameters of the LSM, sparing the arduous task of fine-tuning the LSM by hand for each task. The performance of the evolved LSM is demonstrated on three complex real-world pattern classification problems, including image recognition and spatio-temporal classification.
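The abstract's core idea, treating the LSM's hyperparameters as a black-box optimization problem for an evolution strategy, can be sketched in a few lines. The snippet below is purely illustrative and not the authors' implementation: it uses a simplified (1+1)-ES with the one-fifth success rule as a stand-in for full CMA-ES (adapting only a global step size rather than a covariance matrix), and `toy_lsm_error` is a hypothetical fitness, a sphere function standing in for "validation error of an LSM configured by x", with made-up target values mimicking reservoir settings such as connection probability and synaptic scale.

```python
import math
import random


def one_plus_one_es(fitness, x0, sigma=0.5, iters=2000, seed=0):
    """Minimise `fitness` with a (1+1)-ES using the 1/5 success rule.

    A deliberately simplified stand-in for CMA-ES: only a scalar step
    size `sigma` is adapted, not a full covariance matrix.
    """
    rng = random.Random(seed)
    x, fx = list(x0), fitness(x0)
    for _ in range(iters):
        # Sample one offspring by isotropic Gaussian mutation.
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = fitness(y)
        if fy <= fx:                        # offspring at least as good
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)    # success: widen the search
        else:
            sigma *= math.exp(-1.0 / 12.0)  # failure: narrow the search
    return x, fx


# Hypothetical fitness standing in for the validation error of an LSM
# whose parameters are given by x; the target values are invented.
target = [0.1, 2.0]  # e.g. connection probability, synaptic scale


def toy_lsm_error(x):
    return sum((xi - ti) ** 2 for xi, ti in zip(x, target))


best, err = one_plus_one_es(toy_lsm_error, [1.0, 0.0])
```

In the paper's setting, evaluating `fitness` would mean simulating the reservoir with the candidate configuration and training the readout; the evolution strategy only ever sees the resulting error, which is what makes it suitable for non-differentiable reservoir parameters.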


Keywords: Liquid State Machine · Evolution strategy · CMA-ES · Pattern recognition


References

  1. Carlson, K.D., Nageswaran, J.M., Dutt, N., Krichmar, J.L.: An efficient automated parameter tuning framework for spiking neural networks. Front. Neurosci. 8, 10 (2014)
  2. Chrol-Cannon, J., Jin, Y.: Learning structure of sensory inputs with synaptic plasticity leads to interference. Front. Comput. Neurosci. 9, 103 (2015)
  3. Gerstner, W., Kistler, W.M.: Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge University Press, Cambridge (2002)
  4. Hansen, N., Ostermeier, A.: Completely derandomized self-adaptation in evolution strategies. Evol. Comput. 9(2), 159–195 (2001)
  5. Jin, Y., Wen, R., Sendhoff, B.: Evolutionary multi-objective optimization of spiking neural networks. In: de Sá, J.M., Alexandre, L.A., Duch, W., Mandic, D. (eds.) ICANN 2007. LNCS, vol. 4668, pp. 370–379. Springer, Heidelberg (2007)
  6. Kasabov, N., et al.: Evolving spiking neural networks for personalised modelling, classification and prediction of spatio-temporal patterns with a case study on stroke. Neurocomputing 134, 269–279 (2014)
  7. Kudo, M., Toyama, J., Shimbo, M.: Multidimensional curve classification using passing-through regions. Pattern Recogn. Lett. 20(11), 1103–1111 (1999)
  8. Lake, B.M., Salakhutdinov, R., Tenenbaum, J.B.: Human-level concept learning through probabilistic program induction. Science 350(6266), 1332–1338 (2015)
  9. Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Comput. 14(11), 2531–2560 (2002)
  10. Marblestone, A.H., Wayne, G., Kording, K.P.: Toward an integration of deep learning and neuroscience. Front. Comput. Neurosci. 10, 94 (2016)
  11. Meng, Y., Jin, Y., Yin, J.: Modeling activity-dependent plasticity in BCM spiking neural networks with application to human behavior recognition. IEEE Trans. Neural Netw. 22(12), 1952–1966 (2011)
  12. Panda, P., Roy, K.: Learning to generate sequences with combination of Hebbian and non-Hebbian plasticity in recurrent spiking neural networks. Front. Neurosci. 11, 693 (2017)
  13. Schliebs, S., Kasabov, N.: Evolving spiking neural networks: a survey. Evolving Syst. 4(2), 87–98 (2013)
  14. Schuldt, C., Laptev, I., Caputo, B.: Recognizing human actions: a local SVM approach. In: Proceedings of the 17th International Conference on Pattern Recognition, vol. 3, pp. 32–36 (2004)
  15. Song, S., Miller, K.D., Abbott, L.F.: Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nat. Neurosci. 3(9), 919–926 (2000)
  16. Stimberg, M., Goodman, D., Benichoux, V., Brette, R.: Equation-oriented specification of neural models for simulations. Front. Neuroinform. 8, 6 (2014)
  17. Wu, Q.X., McGinnity, T.M., Maguire, L.P., Glackin, B., Belatreche, A.: Learning under weight constraints in networks of temporal encoding spiking neurons. Neurocomputing 69(16), 1912–1922 (2006)
  18. Xu, Q., Qi, Y., Yu, H., Shen, J., Tang, H., Pan, G.: CSNN: an augmented spiking based framework with perceptron-inception. In: IJCAI, pp. 1646–1652 (2018)

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. State Key Laboratory of Synthetical Automation for Process Industry, Northeastern University, Shenyang, China
  2. Department of Computer Science, University of Surrey, Guildford, UK