High Performance Computing in Engineering and Science

Summary

High Performance Computing (HPC) has left the realm of large laboratories and centers and has become a central part of simulation in engineering and science. We summarize the basic problems, describe the state of the art, and present a concept for an integrated approach covering both hardware and software aspects. Examples illustrate the potential of an HPC workbench for engineering and science.


Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Resch, M. (2005). High Performance Computing in Engineering and Science. In: Krause, E., Shokin, Y.I., Resch, M., Shokina, N. (eds) Computational Science and High Performance Computing. Notes on Numerical Fluid Mechanics and Multidisciplinary Design (NNFM), vol 88. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-32376-7_2

  • Print ISBN: 978-3-540-24120-1

  • Online ISBN: 978-3-540-32376-1
