Benefit of DDN’s IME-FUSE for I/O Intensive HPC Applications

  • Eugen Betke
  • Julian Kunkel
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11203)


Many scientific applications are limited by the I/O performance offered by parallel file systems on conventional storage systems. Flash-based burst buffers provide significantly better performance than HDD-backed storage, but at the expense of capacity. Burst buffers are considered the next step towards achieving the wire speed of the interconnect and providing more predictable, low-latency I/O, which are the holy grail of storage.

A critical evaluation of storage technology is mandatory, as there is no long-term experience with its performance behavior for particular application scenarios. Such an evaluation enables data centers to choose the right products and system architects to plan their integration into HPC architectures.

This paper investigates the native performance of DDN-IME, a flash-based burst buffer solution. Then, it takes a closer look at the IME-FUSE file system, which uses IME as a burst buffer and a Lustre file system as the back-end. Finally, by utilizing a NetCDF benchmark, it estimates the performance benefit for climate applications.
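The core of such a benchmark is measuring sustained bandwidth for sequential writes of fixed-size records, as a NetCDF-style climate workload produces. The following is a minimal stand-in sketch of that measurement using plain POSIX file I/O rather than the actual NetCDF benchmark from the paper; all function names and parameters here are illustrative, not taken from the authors' code:

```python
import os
import tempfile
import time


def measure_write_bandwidth(path, block_size=1 << 20, num_blocks=64):
    """Write num_blocks records of block_size bytes sequentially
    and return the observed bandwidth in MiB/s.

    fsync() is included so the measurement covers the time until
    data reaches the storage layer, not just the page cache."""
    data = os.urandom(block_size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(num_blocks):
            f.write(data)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    total_mib = (block_size * num_blocks) / (1 << 20)
    return total_mib / elapsed


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmpdir:
        bw = measure_write_bandwidth(os.path.join(tmpdir, "testfile"))
        print(f"sequential write bandwidth: {bw:.1f} MiB/s")
```

Pointing `path` at a file on the file system under test (e.g. an IME-FUSE mount point versus a plain Lustre mount) and varying `block_size` yields the kind of comparison the paper performs with IOR and its NetCDF benchmark, albeit single-process rather than parallel.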


Keywords: Lustre · FUSE · Evaluation · Flash-based storage



Thanks to DDN for providing access to the IME test cluster and to Jean-Thomas Acquaviva for the support.



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. Deutsches Klimarechenzentrum, Hamburg, Germany
  2. University of Reading, Reading, UK
