Abstract
Argo is an ongoing project to improve Linux for exascale machines. Targeting emerging production workloads such as workflows and coupled codes, it focuses on providing missing features and building new resource-management facilities. This work is unified under compute containers, a containerization approach that gives modern HPC applications dynamic control over a wide range of kernel interfaces.
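The compute-container approach builds on standard Linux kernel interfaces such as control groups (cgroups). As a minimal sketch only — the mount point, cgroup name, CPU/memory assignments, and application binary below are illustrative assumptions, not the project's actual configuration — a node partition dedicating a set of cores and a NUMA node to an application might look like this:

```shell
# Illustrative cgroup-v1 cpuset partition (requires root; paths assumed).
# Carve out a "compute container" holding CPUs 4-63 and NUMA node 1,
# leaving the remaining cores for system services.
CG=/sys/fs/cgroup/cpuset/argo_compute    # assumed mount point and name
mkdir -p "$CG"
echo 4-63 > "$CG/cpuset.cpus"            # cores dedicated to the application
echo 1    > "$CG/cpuset.mems"            # memory node dedicated to the application

# Launch the application inside the partition by moving its PID into the cgroup.
./my_hpc_app &                           # hypothetical application binary
echo $! > "$CG/tasks"
```

Because the cpuset files can be rewritten at runtime, such a partition can be resized dynamically — one concrete sense in which containers give applications control over kernel-managed resources.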
Notes
1. Because of pixel resolution, threads seem to be in kernel mode for long intervals. However, zooming in on the trace reveals many independent, short kernel activities, such as timer interrupts, that appear as one block in the trace.
Acknowledgements
Results presented in this chapter were obtained using the Chameleon testbed supported by the National Science Foundation. Argonne National Laboratory’s work was supported by the U.S. Department of Energy, Office of Science, Advanced Scientific Computing Research, under Contract DE-AC02-06CH11357. Part of this work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract No. DE-AC52-07NA27344. This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
About this chapter
Cite this chapter
Perarnau, S. et al. (2019). Argo. In: Gerofi, B., Ishikawa, Y., Riesen, R., Wisniewski, R.W. (eds) Operating Systems for Supercomputers and High Performance Computing. High-Performance Computing Series, vol 1. Springer, Singapore. https://doi.org/10.1007/978-981-13-6624-6_12
Print ISBN: 978-981-13-6623-9
Online ISBN: 978-981-13-6624-6