Automatic Halo Management for the Uintah GPU-Heterogeneous Asynchronous Many-Task Runtime

  • Brad Peterson
  • Alan Humphrey
  • Dan Sunderland
  • James Sutherland
  • Tony Saad
  • Harish Dasari
  • Martin Berzins

Abstract

The Uintah computational framework is used for the parallel solution of partial differential equations on adaptive mesh refinement grids on modern supercomputers. Uintah is structured with an application layer and a separate runtime system, and is based on a distributed directed acyclic graph of computational tasks, with a task scheduler that efficiently schedules and executes these tasks on both CPU cores and on-node accelerators. The runtime system identifies task dependencies, creates a task graph prior to the execution of these tasks, automatically generates MPI message tags, and automatically performs halo transfers for simulation variables. Automating halo transfers in a heterogeneous environment poses significant challenges when tasks compute within a few milliseconds, as runtime overhead directly affects wall-clock execution time, or when simulation variables require large halos spanning most or all of the computational domain, as task dependencies become expensive to process. These challenges are magnified at production scale, where application developers require that each compute node perform thousands of different halo transfers among thousands of simulation variables. The principal contributions of this work are to (1) identify and address inefficiencies that arise when mapping tasks onto the GPU in the presence of automated halo transfers, (2) implement new schemes to reduce runtime system overhead, (3) minimize application developer involvement with the runtime, and (4) show the overhead reductions achieved by these improvements.
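The automated halo management described above can be made concrete with a small example. In Uintah, an application task declares which simulation variables it reads and writes, together with the halo (ghost cell) width each read requires; the runtime derives the task-graph dependencies, MPI message tags, and halo transfers from these declarations. The C++ fragment below is a minimal sketch patterned after Uintah's public example tasks; the class name Example and the members d_phiLabel and d_sharedState are illustrative assumptions, not code from this paper.

    // Minimal sketch of a Uintah task declaration (assumed names: Example,
    // d_phiLabel, d_sharedState); assumes the usual Uintah headers are included.
    void Example::scheduleTimeAdvance(const LevelP& level, SchedulerP& sched)
    {
      // Bind the task to the callback that will later execute on a CPU core
      // or an on-node accelerator.
      Task* task = scinew Task("Example::timeAdvance", this, &Example::timeAdvance);

      // Read phi from the previous timestep with a one-cell halo. The runtime,
      // not the application developer, generates the MPI messages that satisfy
      // this halo requirement across patch and node boundaries.
      task->requires(Task::OldDW, d_phiLabel, Ghost::AroundNodes, 1);

      // Write phi for the current timestep.
      task->computes(d_phiLabel);

      sched->addTask(task, level->eachPatch(), d_sharedState->allMaterials());
    }

Because halo widths are declared per variable, the same mechanism covers both the one-cell stencil halo shown here and the very large halos, spanning most or all of the computational domain, that this paper addresses.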

Keywords

Uintah · Hybrid parallelism · Parallel · GPU · Heterogeneous systems · Stencil computation · Optimization · Concurrency · Halo transfer

Acknowledgements

Funding from NSF and DOE is gratefully acknowledged. This material is based upon work supported by the National Science Foundation under Grant No. 1337145 and by the Department of Energy, National Nuclear Security Administration, under Award Number DE-NA0002375. This research used resources of the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. We would also like to acknowledge the Oak Ridge Leadership Computing Facility ALCC award CSC188, “Demonstration of the Scalability of Programming Environments By Simulating Multi-Scale Applications,” for time on Titan. We also thank all those involved with Uintah, past and present.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2018

Authors and Affiliations

  • Brad Peterson (1)
  • Alan Humphrey (1)
  • Dan Sunderland (3)
  • James Sutherland (2)
  • Tony Saad (2)
  • Harish Dasari (1)
  • Martin Berzins (1)

  1. Scientific Computing and Imaging Institute, University of Utah, Salt Lake City, USA
  2. Department of Chemical Engineering, University of Utah, Salt Lake City, USA
  3. Sandia National Laboratories, Albuquerque, USA
