
Computing Services for LHC: From Clusters to Grids

  • Chapter in: From the Web to the Grid and Beyond

Part of the book series: The Frontiers Collection (FRONTCOLL)

Abstract

This chapter traces the development of the computing service for Large Hadron Collider (LHC) data analysis at CERN over the 10 years prior to the start-up of the accelerator. It explores the main factors that influenced the choice of technology, a data-intensive computational Grid; provides a brief explanation of the fundamentals of Grid computing; and records some of the technical and organisational challenges that had to be overcome to achieve the capacity, performance, and usability requirements of the LHC experiments.


Notes

  1. A gram of protons contains 6 × 10^23 protons; a proton accelerated to the LHC energy of 14 TeV (14 × 10^12 electron-volts) acquires, very approximately, the kinetic energy of a fly.

  2. The first full PC-based batch services at CERN were introduced in March 1997 using Windows NT, but these were rapidly superseded by the first Linux PC service, opened in August of the same year.

  3. During the first half of 2010 the infrastructure services of the EGEE project were absorbed into a successor project, the European Grid Infrastructure.

  4. Sites in the Nordic countries are connected via the Nordic Data Grid Facility [14], and sites in Canada through a local Grid infrastructure.

  5. Examples include job scheduling and data catalogue management.

  6. For a detailed discussion on virtualisation see Chap. 6.
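The proton-versus-fly comparison in note 1 can be checked with a quick back-of-the-envelope calculation. The fly's mass and speed below are illustrative assumptions, not values from the text; the point is only that the two energies come out at the same order of magnitude, a few microjoules:

```python
# Back-of-the-envelope check of the fly comparison in note 1.
E_eV = 14e12            # 14 TeV expressed in electron-volts (1 TeV = 10^12 eV)
eV_to_J = 1.602e-19     # joules per electron-volt
E_proton = E_eV * eV_to_J
print(f"proton kinetic energy: {E_proton:.2e} J")   # roughly 2.2e-06 J

# Assumed fly: mass about 12 mg, cruising at about 0.7 m/s (illustrative values)
m_fly, v_fly = 12e-6, 0.7
E_fly = 0.5 * m_fly * v_fly**2
print(f"fly kinetic energy:    {E_fly:.2e} J")      # roughly 2.9e-06 J
```

Both values land near 2-3 microjoules, which is why the note can call the two energies "very approximately" equal.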

References

  1. Baud, J.P., et al.: SHIFT, The Scalable Heterogeneous Integrated Facility for HEP Computing. In: Proceedings of International Conference on Computing in High Energy Physics, Tsukuba, Japan, March 1991, Universal Academy Press, Tokyo


  2. Bethke, S. (Chair), Calvetti, M., Hoffmann, H.F., Jacobs, D., Kasemann, M., Linglin, D.: Report of the Steering Group of the LHC Computing Review. CERN/LHCC/2001-004, 22 February 2001


  3. Bird, I. (ed.): Baseline Services Working Group Report. CERN-LCG-PEB-2005-09. http://lcg.web.cern.ch/LCG/peb/bs/BSReport-v1.0.pdf

  4. Carminati, F. (ed.): Common Use Cases for a HEP Common Application Layer. May 2002. CERN-LHC-SC2-20-2002


  5. Defanti, T.A., Foster, I., Papka, M.E., Stevens, R., Kuhfuss, T.: Overview of the I-WAY: Wide-Area Visual Supercomputing. Int. J. Supercomput. Appl. High Perform. Comput. 10(2/3), 123–131 (1996)


  6. Delfino, M., Robertson, L. (eds.): Solving the LHC Computing Challenge: A Leading Application of High Throughput Computing Fabrics combined with Computational Grids. Technical Proposal. CERN-IT-DLO-2001-03. http://lcg.web.cern.ch/lcg/peb/Documents/CERN-IT-DLO-2001-003.doc (version 1.1)

  7. Elmsheuser, J., et al.: Distributed analysis using GANGA on the EGEE/LCG infrastructure. J. Phys.: Conf. Ser. 119, 072014 (2008)


  8. Enabling Grids for E-Science. Information Society Project INFSO-RI-222667. http://www.eu-egee.org/fileadmin/documents/publications/EGEEIII_Publishable_summary.pdf

  9. Ernst, M., Fuhrmann, P., Gasthuber, M., Mkrtchyan, T., Waldmann, C.: dCache – a distributed storage data caching system. In: Proceedings of Computing in High Energy and Nuclear Physics 2001, Beijing, China. Science Press, New York


  10. Foster, D.G. (ed.): LHC Tier-0 to Tier-1 High-Level Network Architecture. CERN 2005. https://www.cern.ch/twiki/bin/view/LHCOPN/LHCopnArchitecture/LHCnetworkingv2.dgf.doc

  11. Foster, I., Kesselman, C.: Globus: A Metacomputing Infrastructure Toolkit. Int. J. Supercomput. Appl. 11(2), 115–128 (1997). http://www.globus.org/alliance/publications/papers.php#globus

  12. Foster, I., Kesselman, C.: The Grid: Blueprint for a new computing infrastructure. Morgan Kaufmann, San Francisco (1999). ISBN: 1-558660-475-8


  13. Grid Physics Network – GriPhyN Project. http://www.mcs.anl.gov/research/project_detail.php?id=11

  14. NorduGrid. http://www.nordugrid.org/about.html

  15. Knobloch, J. (ed.): The LHC Computing Grid Technical Design Report, CERN June 2005. CERN-LHCC-2005-024. ISBN 92-9083-253-3


  16. Laure, E., et al.: Programming the Grid with gLite. EGEE Technical Report EGEE-TR-2006-001. http://cdsweb.cern.ch/search.py?p=EGEE-TR-2006-001

  17. LHC Computing Grid goes Online - CERN Press Release. 29 September 2003. http://press.web.cern.ch/press/PressReleases/Releases2003/PR13.03ELCG-1.html

  18. Baud, J.P.: Lightweight Disk Pool Manager status and plans. EGEE 3 Conference, Athens, April 2005. https://svnweb.cern.ch/trac/lcgdm/wiki/Dpm

  19. Lo Presti, G., Barring, O., Earl, A., Garcia Rioja, R.M., Ponce, S., Taurelli, G., Waldron, D., Coelho Dos Santos, M.: CASTOR: A Distributed Storage Resource Facility for High Performance Data Processing at CERN. In: Proceedings of the 24th IEEE Conference on Mass Storage Systems and Technologies, pp. 275–280. IEEE Computer Society (2007)


  20. Newman, H. (ed.): Models of Networked Analysis at Regional Centres for LHC Experiments (MONARC) Phase 2 Report. CERN 24 March 2000. CERN/LCB 2000-001


  21. Proposal for Building the LHC Computing Environment at CERN. CERN/2379/Rev. 5 September 2001. http://cdsweb.cern.ch/record/35736/files/CM-P00083735-e.pdf

  22. The Condor Project. http://www.cs.wisc.edu/condor

  23. The European Data Grid Project - European Commission Information Society Project IST-2000-25182. http://eu-datagrid.web.cern.ch/eu-datagrid/Intranet_Home.htm

  24. The European Grid Infrastructure. http://www.egi.eu/

  25. The Globus Alliance. http://www.globus.org/

  26. The Open Science Grid. http://www.opensciencegrid.org/

  27. The Particle Physics Data Grid. http://ppdg.net/

  28. Shoshani, A., et al.: Storage Resource Managers: Recent International Experience on Requirements and Multiple Co-Operating Implementations. In: Proceedings of the 24th IEEE Conference on Mass Storage Systems and Technologies (MSST 2007), pp. 47–59 (2007)


  29. The Virtual Data Toolkit. http://vdt.cs.wisc.edu/

  30. Scalla: Scalable Cluster Architecture for Low Latency Access Using xrootd and olbd Servers. http://xrootd.slac.stanford.edu/papers/Scalla-Intro.pdf


Author information

Correspondence to Les Robertson.


Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Robertson, L. (2011). Computing Services for LHC: From Clusters to Grids. In: Brun, R., Carminati, F., Galli Carminati, G. (eds) From the Web to the Grid and Beyond. The Frontiers Collection. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23157-5_3


  • DOI: https://doi.org/10.1007/978-3-642-23157-5_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-23156-8

  • Online ISBN: 978-3-642-23157-5

  • eBook Packages: Physics and Astronomy
