
Performance Study of Non-volatile Memories on a High-End Supercomputer

  • Leonardo Bautista Gomez
  • Kai Keller
  • Osman Unsal
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11203)

Abstract

The first exascale supercomputers are expected to be operational in China, the USA, Japan and Europe in the early 2020s. These systems will allow scientists to execute applications at extreme scale, with more than \(10^{18}\) floating point operations per second (exa-FLOPS). However, the number of FLOPS is not the only parameter that determines the final performance. In order to store intermediate results or to provide fault tolerance, most applications need to perform a considerable amount of I/O operations at runtime. The performance of those operations is determined by the throughput from volatile memory (e.g. DRAM) to non-volatile stable storage. Given that network bandwidth grows slowly compared to the computing capacity of the nodes, it is highly beneficial to deploy node-local stable storage, such as the new non-volatile memories (NVMe), in order to avoid transfers through the network to the parallel file system. In this work, we analyse the performance of three different storage levels of the CTE-POWER9 cluster located at the Barcelona Supercomputing Center (BSC). We compare the throughput of node-local SSD and NVMe devices to that of the GPFS under various scenarios and settings. On 16 nodes we measured a maximum write performance of 83 GB/s using NVMe devices, 5.6 GB/s using SSD devices, and 4.4 GB/s for writes to the GPFS.
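As a rough illustration of how the write throughput of a single storage tier can be measured (dedicated benchmarks such as IOR follow the same basic pattern at much larger scale), the sketch below times a large sequential write to a given mount point and reports the effective bandwidth. This is a minimal, hypothetical C example, not the methodology of the study: the file path, block size and total volume are illustrative assumptions.

    /* Minimal write-throughput sketch (illustrative only; the study's actual
     * parameters and tooling are not reproduced here). Writes a fixed volume
     * sequentially to the given path, fsyncs, and reports GB/s. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv) {
        /* Assumed mount point of the device under test, e.g. a node-local NVMe. */
        const char *path = (argc > 1) ? argv[1] : "/nvme/testfile";
        const size_t block = 16UL << 20;   /* 16 MiB per write() call (assumed) */
        const size_t total = 16UL << 30;   /* 16 GiB total volume (assumed)     */

        char *buf = malloc(block);
        if (!buf) { perror("malloc"); return 1; }
        memset(buf, 0xAB, block);          /* non-zero data, avoids trivial pages */

        int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (size_t written = 0; written < total; written += block) {
            if (write(fd, buf, block) != (ssize_t)block) { perror("write"); return 1; }
        }
        fsync(fd);                         /* include flush-to-device in the timing */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        close(fd);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.2f GB/s to %s\n", (double)total / secs / 1e9, path);
        free(buf);
        return 0;
    }

For context, the reported aggregate of 83 GB/s across 16 nodes corresponds to roughly 5.2 GB/s of NVMe write bandwidth per node, while the 4.4 GB/s GPFS figure is shared by all nodes through the network, which is exactly the bottleneck that node-local storage avoids.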


Acknowledgements

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 708566 (DURO). Part of the research presented here has received funding from the European Union’s Seventh Framework Programme (FP7/2007-2013) and the Horizon 2020 (H2020) funding framework under grant agreement no. H2020-FETHPC-754304 (DEEP-EST). The present publication reflects only the authors’ views. The European Commission is not liable for any use that might be made of the information contained therein.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Leonardo Bautista Gomez (1)
  • Kai Keller (1)
  • Osman Unsal (1)

  1. Barcelona Supercomputing Center (BSC-CNS), Barcelona, Spain
