Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics

Abstract

The effective utilization at scale of complex machine learning (ML) techniques for high energy physics (HEP) use cases poses several technological challenges, most importantly the actual implementation of dedicated end-to-end data pipelines. A solution to these challenges is presented, which allows training neural network classifiers using solutions from the Big Data and data science ecosystems, integrated with tools, software, and platforms common in the HEP environment. In particular, Apache Spark is exploited for data preparation and feature engineering, running the corresponding (Python) code interactively on Jupyter notebooks. Key integrations and libraries that make Spark capable of ingesting data stored in ROOT format and accessed via the XRootD protocol are described and discussed. Training of the neural network models, defined using the Keras API, is performed in a distributed fashion on Spark clusters using BigDL with Analytics Zoo, and also using TensorFlow, notably for distributed training on CPU and GPU resources. The implementation and the results of the distributed training are described in detail in this work.
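To make the training step of such a pipeline concrete, the sketch below shows how a classifier can be defined with the Keras API and prepared for data-parallel training with TensorFlow's `tf.distribute` module, as in the approach described above. This is a minimal illustration, not the authors' actual model: the layer sizes, the feature count, and the number of output classes are assumptions chosen only for the example.

```python
# Minimal sketch: a dense classifier defined with the Keras API, wrapped in a
# tf.distribute strategy for data-parallel training. Architecture details
# (14 input features, 50/20 hidden units, 3 classes) are illustrative only.
import tensorflow as tf


def build_classifier(n_features: int = 14, n_classes: int = 3) -> tf.keras.Model:
    """Feed-forward classifier producing per-class probabilities."""
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(50, activation="relu"),
        tf.keras.layers.Dense(20, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model


# MirroredStrategy replicates the model on the available local devices
# (all GPUs, or the CPU when no GPU is present) and averages gradients,
# so model.fit() then trains in a data-parallel fashion.
strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = build_classifier()
```

In the pipeline described in the paper, the same Keras model definition can alternatively be handed to BigDL with Analytics Zoo for distributed training directly on a Spark cluster, rather than to `tf.distribute`.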




Author information


Corresponding author

Correspondence to L. Canali.



About this article


Cite this article

Migliorini, M., Castellotti, R., Canali, L. et al. Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics. Comput Softw Big Sci 4, 8 (2020). https://doi.org/10.1007/s41781-020-00040-0


Keywords

  • Big Data
  • Machine Learning
  • HEP
  • Distributed computing
  • Parallel computing
  • GPU