
Offsite Autotuning Approach

Performance Model Driven Autotuning Applied to Parallel Explicit ODE Methods

  • Conference paper

Published in: High Performance Computing (ISC High Performance 2020)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12151)
Abstract

Autotuning (AT) is a promising concept to minimize the often tedious manual effort of optimizing scientific applications for a specific target platform. Ideally, an AT approach can reliably identify the most efficient implementation variant(s) for a new platform or new characteristics of the input by applying suitable program transformations and analytic models. In this work, we introduce Offsite, an offline AT approach that automates this selection process at installation time by rating implementation variants based on an analytic performance model without requiring time-consuming runtime tests. From abstract multilevel description languages, Offsite automatically derives optimized, platform-specific and problem-specific code of possible variants and applies the performance model to these variants.

We apply Offsite to parallel numerical methods for ordinary differential equations (ODEs). In particular, we investigate tuning a specific class of explicit ODE solvers, PIRK methods, for four different initial value problems (IVPs) on three different shared-memory systems. Our experiments demonstrate that Offsite can reliably identify the set of most efficient implementation variants for different given test configurations (ODE solver, IVP, platform) and effectively handle important AT scenarios.
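The selection step described in the abstract can be sketched as follows. This is an illustrative Python outline, not the actual Offsite implementation; the names (`Variant`, `rank_variants`), the tolerance parameter, and the predicted runtimes are assumptions for the example:

```python
from dataclasses import dataclass

@dataclass
class Variant:
    name: str
    predicted_runtime: float  # e.g., from an ECM-based analytic model, in cycles

def rank_variants(variants, tolerance=0.05):
    """Return all variants whose analytic prediction is within
    `tolerance` of the best prediction, best first -- no runtime
    tests are executed."""
    best = min(v.predicted_runtime for v in variants)
    return sorted(
        (v for v in variants if v.predicted_runtime <= best * (1 + tolerance)),
        key=lambda v: v.predicted_runtime,
    )

# Hypothetical variants of one ODE-solver kernel with model predictions:
variants = [
    Variant("A_ij_loop_fused", 1.20e6),
    Variant("A_ji_loop_split", 1.55e6),
    Variant("A_ij_blocked",    1.23e6),
]
print([v.name for v in rank_variants(variants)])
# ['A_ij_loop_fused', 'A_ij_blocked'] -- both within the 5% tolerance
```

Rating a *set* of near-optimal variants, rather than a single winner, reflects that an analytic model carries prediction error; variants within the tolerance band are plausible candidates on the target platform.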

Change history

  • 15 June 2020

    The original versions of chapters 17 and 24 were previously published non-open access. They have now been made open access under a CC BY 4.0 license, and the copyright holder has been changed to ‘The Author(s)’. The book has also been updated to reflect this change.

    Chapters 19 and 25 were inadvertently published open access. This has been corrected, and these chapters are now non-open access.

Notes

  1. YAML is a data serialization language; https://yaml.org.

  2. There are scalable ODE systems but also ODEs with a fixed size [10].

  3. For example files, we refer to https://github.com/RRZE-HPC/kerncraft.

  4. In this work, version 0.8.3 of the Kerncraft tool was used.

  5. The ECM prediction factors in the location of data in the memory hierarchy. Under the simplifying assumption that overlapping effects at cache borders are neglected, this means that as long as the data locations do not change, the ECM model yields the same value for a kernel, independent of the actual n.

  6. The step-up time is the same for OffPre5 and OffPre10, as determining their sets of considered variants is carried out by the same single database operation.
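Note 5 concerns the composition rule of the ECM (Execution-Cache-Memory) model. A minimal sketch of that rule, assuming the common non-overlapping formulation from Stengel et al. [21] (the function name and all cycle counts below are illustrative, not measured values):

```python
def ecm_prediction(t_ol: float, t_nol: float, transfers: list[float]) -> float:
    """Simplified ECM runtime estimate per cache line, in core cycles.

    t_ol:      in-core cycles that overlap with data transfers
    t_nol:     in-core cycles that do not overlap (e.g., load/store)
    transfers: transfer cycles for each memory level the data must
               cross (empty list if the working set fits in L1)

    T_ECM = max(T_OL, T_nOL + sum of inter-level transfer times)
    """
    return max(t_ol, t_nol + sum(transfers))

# Data resident in L3: the L3->L2 and L2->L1 transfers contribute,
# so the prediction is max(8, 4 + 2 + 3) = 9 cycles per cache line.
print(ecm_prediction(8.0, 4.0, [2.0, 3.0]))  # 9.0

# Data resident in L1: no transfer terms, core execution dominates.
print(ecm_prediction(8.0, 4.0, []))  # 8.0
```

This makes the statement in note 5 concrete: the prediction depends on `transfers`, i.e., on where the data resides, but not on the system size n itself, so it stays constant for all n that leave the data locations unchanged.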

References

  1. Ansel, J., et al.: OpenTuner: an extensible framework for program autotuning. In: Proceedings of the 23rd International Conference on Parallel Architecture and Compilation Techniques, PACT 2014, pp. 303–316. ACM, August 2014. https://doi.org/10.1145/2628071.2628092

  2. Balaprakash, P., et al.: Autotuning in high-performance computing applications. Proc. IEEE 106(11), 2068–2083 (2018). https://doi.org/10.1109/JPROC.2018.2841200

  3. Barthel, A., Günther, M., Pulch, R., Rentrop, P.: Numerical techniques for different time scales in electric circuit simulation. In: Breuer, M., Durst, F., Zenger, C. (eds.) High Performance Scientific and Engineering Computing, pp. 343–360. Springer, Heidelberg (2002). https://doi.org/10.1007/978-3-642-55919-8_38

  4. Bilmes, J., Asanovic, K., Chin, C.W., Demmel, J.: Optimizing matrix multiply using PHiPAC: a portable, high-performance, ANSI C coding methodology. In: Proceedings 11th International Conference on Supercomputing, ICS 1997, pp. 340–347. ACM, July 1997. https://doi.org/10.1145/263580.263662

  5. Calvo, M., Franco, J.M., Randez, L.: A new minimum storage Runge-Kutta scheme for computational acoustics. J. Comput. Phys. 201(1), 1–12 (2004). https://doi.org/10.1016/j.jcp.2004.05.012

  6. Christen, M., Schenk, O., Burkhart, H.: PATUS: a code generation and autotuning framework for parallel iterative stencil computations on modern microarchitectures. In: 2011 IEEE International Parallel Distributed Processing Symposium, pp. 676–687, May 2011. https://doi.org/10.1109/IPDPS.2011.70

  7. Das, S., Mullick, S.S., Suganthan, P.N.: Recent advances in differential evolution - an updated survey. Swarm Evol. Comput. 27, 1–30 (2016). https://doi.org/10.1016/j.swevo.2016.01.004

  8. Denoyelle, N., Goglin, B., Ilic, A., Jeannot, E., Sousa, L.: Modeling non-uniform memory access on large compute nodes with the cache-aware roofline model. IEEE Trans. Parallel Distrib. Syst. 30(6), 1374–1389 (2019). https://doi.org/10.1109/TPDS.2018.2883056

  9. Gerndt, M., César, E., Benkner, S. (eds.): Automatic Tuning of HPC Applications - The Periscope Tuning Framework. Shaker Verlag (2015)

  10. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd rev. edn. Springer, Heidelberg (2002)

  11. Hammer, J., Eitzinger, J., Hager, G., Wellein, G.: Kerncraft: a tool for analytic performance modeling of loop kernels. In: Niethammer, C., Gracia, J., Hilbrich, T., Knüpfer, A., Resch, M.M., Nagel, W.E. (eds.) Tools for High Performance Computing 2016, pp. 1–22. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-56702-0_1

  12. Hofmann, J., Alappat, C., Hager, G., Fey, D., Wellein, G.: Bridging the Architecture Gap: Abstracting Performance-Relevant Properties of Modern Server Processors (2019). https://arxiv.org/abs/1907.00048, Preprint

  13. van der Houwen, P.J., Sommeijer, B.P.: Parallel iteration of high-order Runge-Kutta methods with stepsize control. J. Comput. Appl. Math. 29(1), 111–127 (1990). https://doi.org/10.1016/0377-0427(90)90200-J

  14. Mazzia, F., Magherini, C.: Test Set for Initial Value Problem Solvers, Release 2.4, February 2008. https://archimede.dm.uniba.it/~testset/

  15. Mendis, C., Renda, A., Amarasinghe, S., Carbin, M.: Ithemal: accurate, portable and fast basic block throughput estimation using deep neural networks. In: Proceedings of the 36th International Conference on Machine Learning. Proceedings of the Machine Learning Research, vol. 97, pp. 4505–4515. PMLR, June 2019

  16. Pfaffe, P., Grosser, T., Tillmann, M.: Efficient hierarchical online-autotuning: a case study on polyhedral accelerator mapping. In: Proceedings of the ACM International Conference on Supercomputing, ICS 2019, pp. 354–366. ACM, New York (2019). https://doi.org/10.1145/3330345.3330377

  17. Rasch, A., Gorlatch, S.: ATF: a generic directive-based auto-tuning framework. Concurr. Comput. Pract. Exper. 31(5), e4423 (2019). https://doi.org/10.1002/cpe.4423

  18. Scherg, M., Seiferth, J., Korch, M., Rauber, T.: Performance prediction of explicit ODE methods on multi-core cluster systems. In: Proceedings of the 2019 ACM/SPEC International Conference on Performance Engineering, ICPE 2019, pp. 139–150. ACM (2019). https://doi.org/10.1145/3297663.3310306

  19. Seiferth, J., Alappat, C., Korch, M., Rauber, T.: Applicability of the ECM performance model to explicit ODE methods on current multi-core processors. In: Yokota, R., Weiland, M., Keyes, D., Trinitis, C. (eds.) ISC High Performance 2018. LNCS, vol. 10876, pp. 163–183. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-92040-5_9

  20. Shudler, S., Vrabec, J., Wolf, F.: Understanding the scalability of molecular simulation using empirical performance modeling. In: Bhatele, A., Boehme, D., Levine, J.A., Malony, A.D., Schulz, M. (eds.) ESPT/VPA 2017-2018. LNCS, vol. 11027, pp. 125–143. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-17872-7_8

  21. Stengel, H., Treibig, J., Hager, G., Wellein, G.: Quantifying performance bottlenecks of stencil computations using the execution-cache-memory model. In: Proceedings of the 29th ACM International Conference on Supercomputing, ICS 2015, pp. 207–216. ACM (2015). https://doi.org/10.1145/2751205.2751240

  22. Tiwari, A., Hollingsworth, J.K.: Online adaptive code generation and tuning. In: Proceedings of the 2011 IEEE International Parallel Distributed Processing Symposium, IPDPS 2011, pp. 879–892. IEEE, May 2011. https://doi.org/10.1109/IPDPS.2011.86

  23. Whaley, R.C., Petitet, A., Dongarra, J.: Automated empirical optimizations of software and the ATLAS project. Parallel Comput. 27(1), 3–35 (2001). https://doi.org/10.1016/S0167-8191(00)00087-9

  24. Williams, S., Waterman, A., Patterson, D.: Roofline: an insightful visual performance model for multicore architectures. Commun. ACM 52(4), 65–76 (2009). https://doi.org/10.1145/1498765.1498785

  25. Yount, C., Tobin, J., Breuer, A., Duran, A.: YASK - yet another stencil kernel: a framework for HPC stencil code-generation and tuning. In: 2016 6th International Workshop on Domain-Specific Languages and High-Level Frameworks for High Performance Computing, WOLFHPC, pp. 30–39. IEEE, November 2016. https://doi.org/10.1109/WOLFHPC.2016.08

Acknowledgments

This work is supported by the German Ministry of Science and Education (BMBF) under project number 01IH16012A. Furthermore, we thank the Erlangen Regional Computing Center for granting access to their IvyBridge and Skylake systems.

Author information

Correspondence to Johannes Seiferth.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Seiferth, J., Korch, M., Rauber, T. (2020). Offsite Autotuning Approach. In: Sadayappan, P., Chamberlain, B.L., Juckeland, G., Ltaief, H. (eds.) High Performance Computing. ISC High Performance 2020. Lecture Notes in Computer Science, vol. 12151. Springer, Cham. https://doi.org/10.1007/978-3-030-50743-5_19

  • DOI: https://doi.org/10.1007/978-3-030-50743-5_19

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-50742-8

  • Online ISBN: 978-3-030-50743-5
