
Self-optimization Strategy for IO Accelerator Parameterization

  • Lionel Vincent
  • Mamady Nabe
  • Gaël Goret
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11203)

Abstract

Reaching exascale imposes a high level of automation on HPC supercomputers. In this paper, a self-optimization strategy is proposed to improve application IO performance using statistical and machine-learning-based methods.

The proposed method takes advantage of IO data collected from past runs: an off-line analysis infers the most relevant parameterization of an IO accelerator to be used for the next launch of a similar job. This is thus a continuous improvement process that converges toward an optimal parameterization over successive iterations.
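The loop below is a minimal sketch of such a continuous improvement process for a single numeric accelerator parameter; run_job and propose_parameter are hypothetical stand-ins (here a toy runtime model and a simple perturbation rule), not the paper's actual implementation.

import random

def run_job(param):
    # placeholder for launching the job with the given accelerator
    # parameter and measuring its execution time
    return (param - 3.0) ** 2 + random.gauss(0.0, 0.1)

def propose_parameter(history):
    # placeholder inference step: perturb the best known parameter;
    # the paper's approach relies on regression and numerical
    # optimization instead (see the sketch further below)
    best_param, _ = min(history, key=lambda pair: pair[1])
    return best_param + random.uniform(-0.5, 0.5)

history = [(0.0, run_job(0.0))]      # first run with a default setting
for _ in range(19):                  # continuous improvement iterations
    candidate = propose_parameter(history)
    history.append((candidate, run_job(candidate)))

print("best parameterization observed:", min(history, key=lambda p: p[1])[0])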

The inference process uses a numerical optimization method to propose the parameterization that minimizes the execution time of the considered application. A regression method models the objective function to be optimized from the sparse set of data collected over past runs.
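As an illustration, a single inference step could look like the following sketch: a regression model (here a Gaussian process regressor) is fitted on past (parameterization, runtime) pairs, and its prediction is minimized to propose the next parameterization. The choice of Gaussian process regression, the L-BFGS-B optimizer, the parameter bounds, and the sample data are illustrative assumptions, not the paper's exact settings.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from scipy.optimize import minimize

past_params = np.array([[1.0], [2.5], [4.0], [6.0]])   # past parameterizations
past_times = np.array([5.2, 3.1, 2.8, 4.6])            # measured runtimes (s)

# regression model of the objective (execution time) from sparse past runs
surrogate = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=1.0),
    normalize_y=True,
)
surrogate.fit(past_params, past_times)

def predicted_runtime(x):
    # objective to minimize: the surrogate's predicted execution time
    return surrogate.predict(np.atleast_2d(x))[0]

result = minimize(predicted_runtime, x0=np.array([3.0]),
                  bounds=[(0.0, 8.0)], method="L-BFGS-B")
print("proposed parameterization for next run:", result.x)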

Experiments on different artificial parametric spaces show that the proposed method requires fewer than 20 runs to converge toward a suitable parameterization of the IO accelerator.
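A minimal sketch of such an experiment on an artificial parametric space is given below: a synthetic execution-time function replaces the real application, and the fit-then-minimize loop is run for a fixed budget of 20 runs. The 2-D quadratic landscape, the synthetic_runtime function, and all numeric settings are assumptions for illustration only.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from scipy.optimize import minimize

def synthetic_runtime(x):
    # artificial parametric space standing in for the real application
    return np.sum((x - np.array([2.0, -1.0])) ** 2) + 1.0

rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(3, 2))        # a few initial random runs
y = np.array([synthetic_runtime(x) for x in X])

for run in range(20):
    # fit the regression model on all runs collected so far,
    # then minimize its prediction to propose the next run
    model = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    proposal = minimize(lambda x: model.predict(x.reshape(1, -1))[0],
                        x0=X[np.argmin(y)], bounds=[(-5, 5)] * 2).x
    X = np.vstack([X, proposal])
    y = np.append(y, synthetic_runtime(proposal))
    print(f"run {run + 1}: best runtime so far = {y.min():.3f}")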

Keywords

HPC · Supercomputing · IO · Optimization · Regression · Inference · Machine learning · Auto-tuning · Parameterization · Data management


Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  1. ATOS-Bull, BDS R&D, Software Data Management, Échirolles, France
