Reservoir Computing as a Model for In-Materio Computing

Chapter in Advances in Unconventional Computing

Part of the book series: Emergence, Complexity and Computation (ECC, volume 22)

Abstract

Research in substrate-based computing has shown that materials possess rich physical properties that can be exploited to solve computational problems. One such technique, known as Evolution-in-Materio, uses evolutionary algorithms to manipulate material substrates for computation. In general, however, modelling the computational processes occurring in such systems is difficult, and identifying which part of the embodied system is performing the computation remains ill-defined. This chapter discusses the prospects of using Reservoir Computing as a model for in-materio computing, and introduces training techniques taken from Reservoir Computing that could overcome the training difficulties found in the current Evolution-in-Materio approach.
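
To make the training scheme concrete, the sketch below illustrates the standard echo state network recipe on which Reservoir Computing training rests: a fixed, randomly generated reservoir is driven by the input signal, and only a linear readout is trained, here by ridge regression. The reservoir size, input scaling, spectral radius, toy target task and regularisation strength are illustrative assumptions for this sketch, not values taken from the chapter.

```python
# Minimal echo state network sketch: only the linear readout is trained.
import numpy as np

rng = np.random.default_rng(0)

# Reservoir and input weights: fixed, random, never trained.
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # rescale spectral radius to 0.9

def run_reservoir(u):
    """Collect reservoir states x(t) = tanh(W x(t-1) + W_in u(t))."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u_t))
        states.append(x)
    return np.array(states)

# Toy task (illustrative only): reproduce a delayed nonlinear function of the input.
T = 1000
u = rng.uniform(0.0, 0.5, size=T)
y = np.sin(np.pi * np.roll(u, 5))

X = run_reservoir(u)[100:]  # discard an initial washout transient
Y = y[100:]

# Train the readout by ridge regression: W_out = (X^T X + beta*I)^(-1) X^T Y
beta = 1e-6
W_out = np.linalg.solve(X.T @ X + beta * np.eye(n_res), X.T @ Y)

nrmse = np.sqrt(np.mean((X @ W_out - Y) ** 2)) / np.std(Y)
print(f"training NRMSE: {nrmse:.3f}")
```

Because only the readout weights are learned, the same recipe carries over, in principle, when the simulated reservoir is replaced by measurements taken from a physical substrate; the material then plays the role of the fixed random network.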

Notes

  1. NASCENCE homepage: nascence.no.

Acknowledgments

Matthew Dale is funded by a Defence Science and Technology Laboratory (DSTL) Ph.D. studentship.

Author information

Correspondence to Matthew Dale.

Copyright information

© 2017 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Dale, M., Miller, J.F., Stepney, S. (2017). Reservoir Computing as a Model for In-Materio Computing. In: Adamatzky, A. (eds) Advances in Unconventional Computing. Emergence, Complexity and Computation, vol 22. Springer, Cham. https://doi.org/10.1007/978-3-319-33924-5_22

  • DOI: https://doi.org/10.1007/978-3-319-33924-5_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-33923-8

  • Online ISBN: 978-3-319-33924-5

  • eBook Packages: Engineering, Engineering (R0)
