A Slurm Simulator: Implementation and Parametric Analysis

  • Nikolay A. Simakov
  • Martins D. Innus
  • Matthew D. Jones
  • Robert L. DeLeon
  • Joseph P. White
  • Steven M. Gallo
  • Abani K. Patra
  • Thomas R. Furlani
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10724)


Slurm is an open-source resource manager for HPC that provides high configurability for heterogeneous resources and job scheduling. Various Slurm parameter settings can significantly influence HPC resource utilization and job wait time; however, in many cases it is hard to judge how these options will affect overall system performance. A Slurm simulator can be a very helpful tool to aid parameter selection for a particular HPC resource. Here, we report our implementation of a Slurm simulator and the impact of parameter choice on HPC resource performance. The simulator is based on a real Slurm instance, modified to replay historical job traces and to improve simulation speed. The simulation speed depends heavily on the job composition, the size of the HPC resource, and the Slurm configuration. For an 8,000-core heterogeneous cluster we achieve roughly 100-fold acceleration; for example, 20 days of workload can be simulated in about 5 hours. Several parameters affecting job placement were studied. Disabling node sharing on our 8,000-core cluster increased the time needed to complete the same workload by 45%. For a large system (more than 6,000 nodes) composed of two distinct sub-clusters, using two separate Slurm controllers and enabling node sharing can cut wait times nearly in half.
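The effect of node sharing on total workload completion time can be illustrated with a toy first-come-first-served simulator. This is not the paper's Slurm-based simulator; the `simulate` helper, the cluster dimensions, and the job mix below are illustrative assumptions. It shows the mechanism behind the reported 45% slowdown: with sharing disabled, a small job is charged every core on the nodes it touches, so fewer jobs fit concurrently and the makespan grows.

```python
import heapq

def simulate(jobs, nodes=4, cores_per_node=8, share_nodes=True):
    """Toy FCFS simulator. jobs is a list of (cores, runtime) tuples.

    With share_nodes=True, jobs pack onto the cluster by free cores;
    with share_nodes=False, each job is charged whole nodes (its core
    request is rounded up to a multiple of cores_per_node).
    Returns the makespan (finish time of the last job).
    """
    free = nodes * cores_per_node       # currently available cores
    now = 0.0                           # simulated clock
    running = []                        # min-heap of (end_time, cores_held)
    makespan = 0.0
    for cores, runtime in jobs:
        # Exclusive mode: round the request up to whole nodes.
        held = cores if share_nodes else \
            -(-cores // cores_per_node) * cores_per_node
        # FCFS: advance the clock, reclaiming cores from finished jobs,
        # until this job fits.
        while free < held:
            end, c = heapq.heappop(running)
            now = max(now, end)
            free += c
        free -= held
        heapq.heappush(running, (now + runtime, held))
        makespan = max(makespan, now + runtime)
    return makespan

# Eight 4-core, 1-hour jobs on two 8-core nodes: with sharing, four jobs
# run at once (2 waves); without it, each job occupies a full node (4 waves).
shared = simulate([(4, 1.0)] * 8, nodes=2, cores_per_node=8, share_nodes=True)
exclusive = simulate([(4, 1.0)] * 8, nodes=2, cores_per_node=8, share_nodes=False)
```

Here `shared` is 2.0 hours and `exclusive` is 4.0 hours, so node sharing halves the makespan for this particular job mix; the paper's measured 45% penalty reflects a realistic, less uniform workload.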


Keywords: HPC · Slurm · Batch job scheduler · Simulator



This work was supported by the National Science Foundation under awards OCI 1025159, 1203560, and is currently supported by award ACI 1445806 for the XD metrics service for high performance computing systems.

Supplementary material

Supplementary material 1 (PDF, 107 kb)



Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

All authors: Center for Computational Research, University at Buffalo, State University of New York, Buffalo, NY, USA
