From Mathematical Model to Parallel Execution to Performance Improvement: Introducing Students to a Workflow for Scientific Computing

  • Franziska Kasielke
  • Ronny Tschüter
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11339)


Current courses in parallel and distributed computing (PDC) often focus on programming models and techniques. However, PDC is embedded in a scientific workflow that requires more than programming skills. The workflow spans mathematical modeling, programming, data interpretation, and performance analysis. The last task in particular is insufficiently covered in educational courses. Often, scientists from different fields of knowledge, each with individual expertise, collaborate to perform these tasks. In this work, we present the general design and the implementation of an exercise within the course “Supercomputers and their programming” at Technische Universität Dresden, Faculty of Computer Science. In the exercise, the students pass through a complete workflow for scientific computing. The students gain or improve their knowledge about: (i) mathematical modeling of systems, (ii) transferring the mathematical model to a (parallel) program, (iii) visualization and interpretation of the experiment results, and (iv) performance analysis and improvement. The exercise aims to bridge the gap between the individual tasks of a scientific workflow and to equip students with broad knowledge.


Keywords: Workflow for scientific computing · Teaching · Parallel programming · Performance analysis · Heat transfer



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Faculty of Computer Science, Technische Universität Dresden, Dresden, Germany
  2. Center for Information Services and High Performance Computing, Technische Universität Dresden, Dresden, Germany
