
Introduction

Piotr Antonik
Chapter, part of the Springer Theses book series

Abstract

In this chapter we will address three questions: (1) What is reservoir computing? (2) What does it have to do with optics and electronics? (3) What are FPGAs?
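To give the first question a concrete flavour: the core idea of reservoir computing is to drive a fixed, randomly connected recurrent network (the "reservoir") with the input signal and train only a linear readout on the reservoir states. The sketch below is a minimal echo state network in that spirit; it is an illustrative toy, not the implementation described in this thesis, and every size and hyperparameter (100 neurons, spectral radius 0.9, ridge parameter, the sine-prediction task) is an arbitrary choice made here for demonstration.

```python
import numpy as np

# Minimal echo state network (ESN) sketch illustrating reservoir computing.
# Only the linear readout is trained; input and recurrent weights stay random.
rng = np.random.default_rng(0)

N = 100          # number of reservoir neurons (arbitrary)
T = 500          # length of the training sequence (arbitrary)

# Random input and recurrent weights, rescaled so the reservoir's
# spectral radius is below 1 (a common sufficient heuristic for the
# "echo state" / fading-memory property).
w_in = rng.uniform(-1, 1, N)
w_res = rng.normal(0, 1, (N, N))
w_res *= 0.9 / np.max(np.abs(np.linalg.eigvals(w_res)))

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(0.2 * np.arange(T + 1))

# Drive the reservoir and collect its nonlinear states.
x = np.zeros(N)
states = np.empty((T, N))
for t in range(T):
    x = np.tanh(w_in * u[t] + w_res @ x)
    states[t] = x

# Train the readout by ridge regression on the shifted target u[1:].
reg = 1e-6
w_out = np.linalg.solve(states.T @ states + reg * np.eye(N),
                        states.T @ u[1:T + 1])

pred = states @ w_out
nmse = np.mean((pred - u[1:T + 1]) ** 2) / np.var(u[1:T + 1])
print("training NMSE:", nmse)
```

Because the recurrent weights are never trained, the reservoir can in principle be replaced by any sufficiently rich physical dynamical system (optical, electronic, or otherwise), with only the readout computed offline or, as in this thesis, on an FPGA.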


Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

CentraleSupélec, Metz, France
