Bayesian Optimisation of Large-scale Photonic Reservoir Computers

Abstract

Reservoir computing is a growing paradigm for the simplified training of recurrent neural networks, with high potential for hardware implementation. Numerous experiments in optics and electronics have demonstrated performance comparable to that of state-of-the-art digital algorithms. Many of the most recent works in the field focus on large-scale photonic systems, with tens of thousands of physical nodes and arbitrary interconnections. While this trend significantly expands the potential applications of photonic reservoir computing, it also complicates the optimisation of the system's many hyper-parameters. In this work, we propose the use of Bayesian optimisation for efficient exploration of the hyper-parameter space in a minimal number of iterations. We test this approach on a previously reported large-scale experimental system, compare it with the commonly used grid search, and report notable improvements both in performance and in the number of experimental iterations required to optimise the hyper-parameters. Bayesian optimisation thus has the potential to become the standard method for tuning the hyper-parameters in photonic reservoir computing.
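
The core idea summarised above (replacing an exhaustive grid search with Gaussian-process Bayesian optimisation of the reservoir's hyper-parameters) can be illustrated with a short, self-contained sketch. The example below is not the authors' code: it optimises a small software echo-state network standing in for the photonic reservoir, on the standard NARMA10 benchmark, using the scikit-optimize library. The three hyper-parameters (spectral radius, input scaling, ridge regularisation) and all numerical settings are illustrative assumptions.

```python
# Minimal sketch (not the authors' experiment): Bayesian optimisation of a
# small software echo-state network used as a stand-in for a photonic
# reservoir. Assumes NumPy and scikit-optimize are installed.
import numpy as np
from skopt import gp_minimize
from skopt.space import Real

rng = np.random.default_rng(0)

def narma10(T):
    """Generate an input/target pair for the NARMA10 benchmark."""
    u = rng.uniform(0, 0.5, T)
    y = np.zeros(T)
    for t in range(10, T - 1):
        y[t + 1] = (0.3 * y[t] + 0.05 * y[t] * np.sum(y[t - 9:t + 1])
                    + 1.5 * u[t - 9] * u[t] + 0.1)
    return u, y

def nmse(y_true, y_pred):
    """Normalised mean-square error, the usual reservoir-computing metric."""
    return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

def evaluate_reservoir(params, n=200, T=3000):
    """Build a random reservoir with the given hyper-parameters, train a
    ridge-regression readout on the first half of the data and return the
    test NMSE. This is the (noisy) objective seen by the optimiser."""
    rho, gamma, log10_ridge = params
    u, y = narma10(T)
    W = rng.standard_normal((n, n))
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))   # set spectral radius
    w_in = gamma * rng.standard_normal(n)              # input scaling
    x = np.zeros((T, n))
    for t in range(1, T):
        x[t] = np.tanh(W @ x[t - 1] + w_in * u[t])
    split = T // 2
    X_tr, X_te = x[100:split], x[split:]               # discard warm-up
    y_tr, y_te = y[100:split], y[split:]
    ridge = 10.0 ** log10_ridge
    w_out = np.linalg.solve(X_tr.T @ X_tr + ridge * np.eye(n), X_tr.T @ y_tr)
    return nmse(y_te, X_te @ w_out)

# Gaussian-process Bayesian optimisation over three hyper-parameters,
# using far fewer objective evaluations than an exhaustive grid search.
space = [Real(0.1, 1.5, name="spectral_radius"),
         Real(0.01, 2.0, name="input_scaling"),
         Real(-9, 0, name="log10_ridge")]
result = gp_minimize(evaluate_reservoir, space, n_calls=30, random_state=0)
print("best NMSE:", result.fun, "at", result.x)
```

Because each objective evaluation rebuilds the reservoir and regenerates the data, the Gaussian-process surrogate sees a noisy objective, which mirrors the situation faced when every evaluation is a run of an experimental system.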

Funding

This work was financially supported by AFOSR (grant nos. FA-9550-15-1-0279 and FA-9550-17-1-0072), the Région Grand-Est, and the Volkswagen Foundation via the NeuroQNet Project.

Author information

Corresponding author

Correspondence to Piotr Antonik.

Ethics declarations

Conflict of Interest

The authors declare that they have no conflict of interest.

Additional information

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article belongs to the Topical Collection: Trends in Reservoir Computing

Guest Editors: Claudio Gallicchio, Alessio Micheli, Simone Scardapane, Miguel C. Soriano

About this article

Cite this article

Antonik, P., Marsal, N., Brunner, D. et al. Bayesian Optimisation of Large-scale Photonic Reservoir Computers. Cogn Comput (2021). https://doi.org/10.1007/s12559-020-09732-6

Keywords

  • Bayesian optimisation
  • Photonic reservoir computing
  • Large-scale networks
  • Hyper-parameter optimisation