
Recurrent Neural Networks with Grid Data Quantization for Modeling LHC Superconducting Magnets Behavior

  • Conference paper
Information Technology, Systems Research, and Computational Physics (ITSRCP 2018)

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 945)

Abstract

This paper presents a model based on a recurrent neural network (RNN) architecture, in particular the Long Short-Term Memory (LSTM) network, for modeling the behavior of the Large Hadron Collider (LHC) superconducting magnets. High-resolution data available in the Post Mortem database were used to train a set of models and to compare their performance with respect to various hyper-parameters, such as the input data quantization and the number of cells. A novel approach to signal-level quantization reduced the size of the model, simplified tuning of the magnet monitoring system, and made the process scalable. The paper shows that RNNs such as LSTM or GRU may be used for modeling high-resolution signals with an accuracy above 0.95 while using as few as 800 to 1200 parameters. This makes the solution suitable for hardware implementation, which is essential for monitoring the performance-critical, high-speed signals of the LHC superconducting magnets.
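To make the abstract's two central ideas concrete, below is a minimal sketch (not the authors' code) of grid quantization of a high-resolution signal into a small set of discrete levels, followed by a deliberately small LSTM predictor whose parameter count lands in the 800 to 1200 range quoted above. The level count (16), window length (32), cell count (8), synthetic sine signal, and use of Keras are all illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of the abstract's two ideas; all constants are
# illustrative assumptions, not values taken from the paper.
import numpy as np
from tensorflow import keras

def grid_quantize(signal, num_levels=16):
    """Map a real-valued signal onto num_levels equally spaced levels."""
    edges = np.linspace(signal.min(), signal.max(), num_levels + 1)[1:-1]
    return np.digitize(signal, edges)  # integer level indices in [0, num_levels)

# Toy stand-in for a high-resolution magnet voltage time series.
t = np.linspace(0.0, 10.0, 2000)
signal = np.sin(t) + 0.05 * np.random.randn(t.size)

num_levels, window = 16, 32
levels = grid_quantize(signal, num_levels)

# Sliding window: predict the next quantized level from the previous
# `window` levels, one-hot encoded.
X = np.array([levels[i:i + window] for i in range(len(levels) - window)])
y = levels[window:]
X = keras.utils.to_categorical(X, num_levels)  # shape: (samples, window, 16)

# A deliberately small LSTM: 4 * ((16 + 8) * 8 + 8) = 800 weights in the
# recurrent layer, plus 144 in the softmax head (944 parameters in total).
model = keras.Sequential([
    keras.layers.Input(shape=(window, num_levels)),
    keras.layers.LSTM(8),
    keras.layers.Dense(num_levels, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=64, verbose=0)
```

With 8 cells on a 16-dimensional one-hot input, the LSTM layer alone contributes 4 * ((16 + 8) * 8 + 8) = 800 weights and the softmax output adds 8 * 16 + 16 = 144 more, giving 944 trainable parameters, inside the 800 to 1200 window reported in the abstract.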

This work was supported by the Faculty of Physics and Applied Computer Science and the Faculty of Computer Science, Electronics and Telecommunications of AGH-UST, as part of statutory tasks within the subsidy of the Ministry of Science and Higher Education.



Author information


Corresponding author

Correspondence to Maciej Wielgosz.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Wielgosz, M., Skoczeń, A. (2020). Recurrent Neural Networks with Grid Data Quantization for Modeling LHC Superconducting Magnets Behavior. In: Kulczycki, P., Kacprzyk, J., Kóczy, L., Mesiar, R., Wisniewski, R. (eds) Information Technology, Systems Research, and Computational Physics. ITSRCP 2018. Advances in Intelligent Systems and Computing, vol 945. Springer, Cham. https://doi.org/10.1007/978-3-030-18058-4_14
