
Evaluating the Advantage of Reactive MPI-aware Power Control Policies

  • Daniele Cesarini
  • Carlo Cavazzoni
  • Andrea Bartolini
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12044)

Abstract

Power consumption is a critical factor that limits the performance and raises the costs of today's and future supercomputer installations. Several state-of-the-art approaches reduce the energy consumption of scientific applications by lowering the operating frequency of the computational elements during MPI communication regions. These algorithms rely on predicting, at execution time, the duration of each communication region before it begins. The COUNTDOWN approach pursues the same goal by means of a purely reactive, timer-based policy. In this paper, we compare the COUNTDOWN algorithm with a state-of-the-art prediction-based algorithm, showing that timer-based policies are more effective at extracting power-saving opportunities and reducing energy waste, with lower overhead. When running on a Tier1 system, COUNTDOWN achieves 5% more energy saving with lower overhead than the state-of-the-art proactive policy. This suggests that reactive policies are better suited than proactive approaches for communication-aware power management algorithms.

Keywords

HPC · MPI · Power management · Reactive policy · DVFS · NPB · Energy efficiency · Parallel programming

Notes

Acknowledgment

Work supported by the EU FETHPC project ANTAREX (g.a. 671623), EU project ExaNoDe (g.a. 671578), and CINECA research grant on Energy-Efficient HPC systems.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. University of Bologna, Bologna, Italy
  2. Cineca, Casalecchio di Reno, Italy
