Understanding Metadata Latency with MDWorkbench

  • Julian Martin Kunkel
  • George S. Markomanolis
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11203)

Abstract

While parallel file systems often satisfy the needs of applications with bulk-synchronous I/O, they lack capabilities for dealing with metadata-intensive workloads. Typically, procurements focus on aggregated metadata throughput as measured with the MDTest benchmark (https://www.vi4io.org/tools/benchmarks/mdtest). However, metadata performance is crucial for interactive use, and metadata benchmarks involve even more parameters than I/O benchmarks. Several aspects, particularly response latency and interactive workloads operating on a working set of data, are currently not covered and, therefore, not in the focus of vendors. This lack of capabilities can be observed in the IO-500 list, where the metadata performance of the best and worst systems does not differ significantly.

In this paper, we introduce a new benchmark called MDWorkbench, which generates a reproducible workload emulating many concurrent users or – in an alternative view – queuing systems. This benchmark provides a detailed latency profile, overcomes caching issues, and provides a method to assess the quality of the observed throughput. We evaluate the benchmark on state-of-the-art parallel file systems with GPFS (IBM Spectrum Scale), Lustre, Cray DataWarp, and DDN IME, and conclude that it reveals characteristics that could not be identified before.
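To illustrate the kind of workload described above, the following sketch is hypothetical code (not taken from the MDWorkbench source; object count, object size, and file naming are illustrative assumptions). It pre-creates a working set of small objects and then times each stat/read/delete/re-create iteration individually, so that a latency profile can be derived rather than only an aggregated throughput number:

    /* Hypothetical sketch, not the actual MDWorkbench implementation:
     * one emulated "user" operating on a pre-created working set of
     * small objects, timing every iteration to build a latency profile. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <time.h>
    #include <unistd.h>

    #define OBJECTS  1000   /* assumed working-set size */
    #define OBJ_SIZE 3901   /* assumed small object size in bytes */

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    static int cmp(const void *a, const void *b) {
        double d = *(const double *)a - *(const double *)b;
        return (d > 0) - (d < 0);
    }

    int main(void) {
        static double lat[OBJECTS];
        char buf[OBJ_SIZE];
        memset(buf, 'x', sizeof(buf));

        /* Pre-creation phase: build the working set so the timed phase
         * operates on existing objects instead of an empty directory. */
        for (int i = 0; i < OBJECTS; i++) {
            char name[32];
            snprintf(name, sizeof(name), "obj.%d", i);
            int fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
            if (fd < 0) { perror(name); return 1; }
            write(fd, buf, sizeof(buf));
            close(fd);
        }

        /* Timed phase: stat + read + delete + re-create per object,
         * keeping the working-set size constant and recording the
         * latency of each full iteration. */
        for (int i = 0; i < OBJECTS; i++) {
            char name[32];
            snprintf(name, sizeof(name), "obj.%d", i);
            double t0 = now();
            struct stat sb;
            stat(name, &sb);
            int fd = open(name, O_RDONLY);
            read(fd, buf, sizeof(buf));
            close(fd);
            unlink(name);
            fd = open(name, O_CREAT | O_WRONLY, 0644);
            write(fd, buf, sizeof(buf));
            close(fd);
            lat[i] = now() - t0;
        }

        /* A latency profile exposes stragglers that an aggregated
         * throughput number averages away. */
        qsort(lat, OBJECTS, sizeof(double), cmp);
        printf("median %.6fs  p99 %.6fs  max %.6fs\n",
               lat[OBJECTS / 2], lat[(int)(OBJECTS * 0.99)],
               lat[OBJECTS - 1]);
        return 0;
    }

Repeating the timed loop over the same (constant-size) working set further defeats caching, and running one such loop per process would emulate many concurrent users; both behaviors are assumptions about the workload pattern, not a description of the MDWorkbench code.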

Notes

Acknowledgements

Thanks to DDN for providing access to their facility, and to Jean-Thomas Acquaviva and Jay Lofstead for the discussions. This research used resources of the KAUST Supercomputing Core Laboratory, the Argonne Leadership Computing Facility, and NERSC; the latter two are DOE Office of Science User Facilities supported under Contracts DE-AC02-06CH11357 and DE-AC02-05CH11231, respectively.


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Julian Martin Kunkel (1)
  • George S. Markomanolis (2)
  1. University of Reading, Reading, UK
  2. KAUST Supercomputing Laboratory, Thuwal, Saudi Arabia