
IFAE 2007 pp 309-313

CDF Computing Experience: an Overview

  • Gabriele Compostella
Conference paper

Abstract

The Collider Detector at Fermilab (CDF) [1] is an experiment at the Tevatron collider, where protons and antiprotons collide at a center-of-mass energy of 1.96 TeV. The Tevatron instantaneous luminosity has reached 2.9 × 10³² cm⁻² s⁻¹, the highest achieved by a hadronic collider to date; this has provided CDF with an integrated luminosity of about 2.7 fb⁻¹. Such an integrated luminosity corresponds to almost 4 × 10⁹ events that have to be processed and made available to the collaboration for physics analysis in a fast and efficient way. At least the same amount of Monte Carlo data is also needed to perform high-precision physics measurements or to search for new phenomena. The problem of processing, analyzing and producing such a large amount of real and simulated data was first addressed in 2001, when the CDF computing model was designed. It was based on a dedicated farm, called CAF [2], hosted at Fermi National Accelerator Laboratory (FNAL), to which were soon added several dCAFs (distributed CAFs) located at CDF institutions around the world.
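The relation between the quoted integrated luminosity and the event count is the standard N = σ·L. As a back-of-envelope check (a sketch using only the figures quoted above; the resulting effective cross-section is derived here for illustration and is not a number from the paper):

```python
# Back-of-envelope check of the abstract's figures via N = sigma_eff * L_int,
# i.e. sigma_eff = N / L_int. Inputs are the numbers quoted in the abstract;
# the derived effective recorded cross-section is illustrative only.

N_EVENTS = 4e9      # recorded events (from the abstract)
L_INT_FB = 2.7      # integrated luminosity in fb^-1 (from the abstract)

sigma_eff_fb = N_EVENTS / L_INT_FB   # effective cross-section in femtobarn
sigma_eff_ub = sigma_eff_fb / 1e9    # 1 microbarn = 1e9 femtobarn

print(f"effective recorded cross-section ~ {sigma_eff_ub:.1f} microbarn")
```

An effective recorded cross-section of order a microbarn is far below the total inelastic cross-section at the Tevatron, reflecting the fact that only trigger-selected events are written to tape.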

Keywords

Grid Resource, Work Node, Proxy Cache, Monte Carlo Data, Grid Site


References

  1. CDF Collaboration: The CDF II Technical Design Report, FERMILAB-Pub-96/390-E (1996)
  2. M. Casarsa, S.C. Hsu, E. Lipeles, M. Neubauer, S. Sarkar, I. Sfiligoi, F. Wuerthwein: The CDF Analysis Farm, AIP Conf. Proc. 794, 275 (2005)
  3. I. Terekhov et al.: Distributed data access and resource management in the D0 SAM system, FERMILAB-CONF-01-101
  4. Kerberos Web site, http://web.mit.edu/Kerberos/
  5. FBSNG Web site, Next Generation of FBS, http://www-isd.fnal.gov/fbsng/
  6. I. Sfiligoi et al.: The Condor based CDF CAF, Presented at CHEP04, Interlaken, Switzerland, Sept 27-Oct 1, 2004, 390 (2004)
  7. S. Sarkar, I. Sfiligoi et al.: GlideCAF — A Late binding approach to the Grid, Presented at Computing in High Energy and Nuclear Physics, Mumbai, India, Feb 13-17, 2006, 147 (2006)
  8. S.C. Hsu, E. Lipeles, M. Neubauer, M. Norman, S. Sarkar, I. Sfiligoi, F. Wuerthwein: OSG-CAF — A single point of submission for CDF to the Open Science Grid, Presented at Computing in High Energy and Nuclear Physics, Mumbai, India, Feb 13-17, 2006, 140 (2006)
  9. M. Livny, S. Son: Cluster Computing and the Grid, Proceedings, CCGrid 2003, 3rd IEEE/ACM International Symposium, 542-549 (2003)
  10. F. Delli Paoli, A. Fella, D. Jeans, D. Lucchesi et al.: LcgCAF — The CDF portal to the gLite Middleware, Presented at Computing in High Energy and Nuclear Physics, Mumbai, India, Feb 13-17, 2006, 148 (2006)
  11. E. Laure et al.: Programming the Grid with gLite, EGEE-TR-2006-001 (2006)
  12. S. Kosyakov et al.: Frontier: High Performance Database Access Using Standard Web Components, Presented at CHEP04, Interlaken, Switzerland, Sept 27-Oct 1, 2004, 204 (2004)
  13. C. Moretti, I. Sfiligoi, D. Thain: Transparently Distributing CDF Software with Parrot, Presented at Computing in High Energy and Nuclear Physics, Mumbai, India, Feb 13-17, 2006, 26 (2006)

Copyright information

© Springer-Verlag Italia 2008

Authors and Affiliations

  • Gabriele Compostella
  1. University of Trento and INFN, Povo (TN), Italy
