Performance and Productivity of New Programming Languages

  • Iris Christadler
  • Giovanni Erbacci
  • Alan D. Simpson
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7174)


Will HPC programmers (have to) adapt to new programming languages and parallelization concepts? Many different languages are currently discussed as complements or successors to the traditional HPC programming paradigm (Fortran/C+MPI). These include both languages designed specifically for the HPC community (e.g. the partitioned global address space (PGAS) languages UPC, CAF, X10 or Chapel) and languages that allow the use of hardware accelerators (e.g. Cn for ClearSpeed accelerator boards, CellSs for the IBM Cell, and GPGPU languages like CUDA, OpenCL, CAPS HMPP and RapidMind).

During the project “Partnership for Advanced Computing in Europe – Preparatory Phase” (PRACE-PP), developers across Europe have ported three benchmarks to more than 12 different programming languages and assessed both performance and productivity. Their results will help scientific groups to choose the optimal combination of language and hardware to efficiently tackle their scientific problems. This paper describes the framework used for this assessment and the results gathered during the study together with guidelines for interpretation.
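The study weighs the performance gained by a port against the effort it cost. The PRACE-PP deliverables define their own assessment framework; as a purely illustrative sketch (the metric, function name and numbers below are hypothetical, not taken from the paper), one simple combined figure is speedup over the Fortran/C+MPI baseline normalised by the relative development effort:

```python
# Hypothetical illustration of a performance/productivity trade-off metric.
# NOTE: this is NOT the metric used in the PRACE-PP study; it only shows the
# kind of figure such an assessment framework might report.

def relative_productivity(base_time, new_time, base_effort, new_effort):
    """Speedup over the baseline, normalised by the extra development effort.

    base_time/new_time    -- runtime of baseline vs. ported code
    base_effort/new_effort -- development effort (e.g. person-hours or LOC)
    """
    speedup = base_time / new_time            # >1: the port runs faster
    effort_ratio = new_effort / base_effort   # >1: the port cost more work
    return speedup / effort_ratio

# Hypothetical numbers: a GPU port runs 4x faster but took 2x the effort.
score = relative_productivity(base_time=100.0, new_time=25.0,
                              base_effort=40.0, new_effort=80.0)
print(score)  # 2.0 -> still a net win on this combined measure
```

A score above 1 means the performance gain outweighed the added effort; real assessments would of course report performance and productivity separately as well.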






Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Iris Christadler, Leibniz Supercomputing Centre, Garching, Germany
  • Giovanni Erbacci, CINECA Supercomputing Centre, Bologna, Italy
  • Alan D. Simpson, EPCC, The University of Edinburgh, United Kingdom
