The Main Scientific and Technical Problems of Using Hybrid HPC Clusters in Materials Science


The article discusses the use of hybrid HPC clusters for running software that performs electronic-structure calculations and atomic-scale materials modeling. Modern software systems designed to solve materials-science problems exploit various hardware accelerators to increase performance. The use of such computing technologies requires adapting application code to hybrid computing architectures, which combine classic central processing units (CPUs) with specialized graphics accelerators (GPUs). Operating large hybrid computing systems also requires workload-management methods that allow efficient use of computing resources and avoid equipment downtime. First of all, these methods should allow parallel execution of user applications that employ computational accelerators. In practice, however, the software environments required by different applications often cannot be deployed in the same computing environment due to software incompatibility. To overcome this limitation and ensure the parallel execution of diverse types of materials-science tasks, the creation of individual task execution environments based on virtualization and cloud technologies is proposed. A natural continuation of virtualization technologies and cloud services is the construction of digital platforms. The article proposes the use of a digital platform for hosting scientific materials-science services that provide calculations using various application software systems. Digital platforms make it possible to provide a unified user interface to these services: the platform supports finding the necessary scientific services and transferring source data and results between users, the platform, and hybrid high-performance clusters.
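As an illustration of the isolation idea described above, the sketch below routes a user task to a per-package container image before building a job description for a hybrid CPU/GPU cluster queue. This is a minimal sketch, not the authors' platform code: the package names, registry image tags, and the `Task`/`dispatch` helpers are all hypothetical.

```python
# Hypothetical sketch: a digital platform assigns each materials-science
# task an individual containerized runtime environment, so packages with
# mutually incompatible software stacks can run on the same cluster.

from dataclasses import dataclass

# Hypothetical mapping of application packages to container images that
# encapsulate each package's software environment.
IMAGES = {
    "vasp": "registry.local/vasp-gpu:6.3",
    "quantum-espresso": "registry.local/qe-gpu:7.1",
}

@dataclass
class Task:
    package: str      # application package requested by the user
    input_path: str   # source data uploaded through the platform
    gpus: int         # number of graphics accelerators requested

def dispatch(task: Task) -> dict:
    """Build a job description with an individual runtime environment."""
    if task.package not in IMAGES:
        raise ValueError(f"no service registered for {task.package!r}")
    return {
        "image": IMAGES[task.package],   # isolates incompatible stacks
        "input": task.input_path,
        "gres": f"gpu:{task.gpus}",      # accelerator request for the scheduler
    }

job = dispatch(Task("quantum-espresso", "/data/run42", gpus=2))
print(job["image"])  # each task runs inside its own environment
```

Because every task carries its own image, two services with conflicting library versions can execute in parallel on the same hybrid cluster, which is the workload-management goal stated in the abstract.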


Fig. 1.
Fig. 2.


1. Abramov, S.M. and Lilitko, E.P., Current state and development prospects of high-end HPC systems, Inform. Technol. Comput. Syst., 2013, no. 2, pp. 6–22.

2. Zhuravlev, A.A., Reviznikov, D.L., and Abgaryan, K.K., The method of discrete elements with an atomic structure, in Proceedings of the 21st International Conference on Computational Mechanics and Modern Applied Software Systems (VMSPPS'2019), Moscow, 2019, pp. 59–61.

3. Mikurova, A.V. and Skvortsov, V.S., Creating a generalized model for predicting the inhibition of influenza virus neuraminidase of various strains, Biomed. Chem., 2018, vol. 64, no. 3, pp. 247–252.

4. Mikurova, A.V., Skvortsov, V.S., and Raevsky, O.A., Computer assessment of the selectivity of inhibition of muscarinic receptors M1–M4, Biomed. Chem.: Res. Methods, 2018, vol. 1, no. 3.

5. Vouzis, P.D. and Sahinidis, N.V., GPU-BLAST: using graphics processors to accelerate protein sequence alignment, Bioinformatics, 2011, vol. 27, no. 2, pp. 182–188.

6. Gorchakov, A.Yu. and Malkova, V.U., Comparison of Intel Core i7, Intel Xeon, Intel Xeon Phi and IBM Power 8 processors using the example of initial data recovery, Int. J. Open Inform. Technol., 2018, vol. 6, no. 4.

7. Volkov, S. and Sukhoroslov, O., Simplifying the use of clouds for scientific computing with Everest, Proc. Comput. Sci., 2017, vol. 119, pp. 112–120.

8. Gorchakov, A.Yu., Using OpenMP to implement the multithreaded method of nonuniform coverings, in Advanced Information Technologies (PIT 2018), Proceedings of the International Scientific and Technical Conference, 2018, pp. 613–617.

9. Volovich, K.I., Denisov, S.A., and Malkovsky, S.I., The formation of an individual modeling environment in a hybrid high-performance computing complex, in Proceedings of the 1st International Conference on Mathematical Modeling in Materials Science of Electronic Components (MMMEC-2019), Moscow: MAX Press, 2019, pp. 21–24.

10. Afanasyev, I. and Voevodin, V., The comparison of large-scale graph processing algorithms implementation methods for Intel KNL and NVIDIA GPU, Commun. Comput. Inform. Sci., 2017, vol. 793, pp. 80–94.

11. Ding, F., Mey, D., Wienke, S., Zhang, R., and Li, L., A study on today's cloud environments for HPC applications, in Proceedings of the 3rd International Conference on Cloud Computing and Services Science (CLOSER 2013), Aachen, Germany, May 8–10, 2013, Berlin: Springer, 2014, pp. 114–127.

12. Volovich, K.I., Zatsarinnyy, A.A., Kondrashev, V.A., and Shabanov, A.P., Scientific research as a cloud service, Sist. Sredstva Inform., 2017, vol. 27, no. 1, pp. 73–84.

13. Zatsarinny, A.A., Gorshenin, A.K., Kondrashev, V.A., Volovich, K.I., and Denisov, S.A., Toward high performance solutions as services of research digital platform, in Proceedings of the 13th International Symposium on Intelligent Systems (INTELS'18), St. Petersburg, Russia, October 22–24, 2018, Proc. Comput. Sci., 2019, no. 150, pp. 622–627.

14. Zatsarinny, A.A., Gorshenin, A.K., Volovich, K.I., Kolin, K.K., Kondrashev, V.A., and Stepanov, P.V., Management of scientific services as the basis of the national digital platform "Science and Education," Strateg. Priorities, 2017, no. 2 (14), pp. 103–113.

15. Kondrashev, V.A. and Volovich, K.I., Service management of a digital platform on the example of high-performance computing services, in Proceedings of the International Scientific Conference, Voronezh, September 3–6, 2018.

16. Kartsev, A., Malkovsky, S.I., Volovich, K.I., and Sorokin, A.A., The study of the performance and scalability of the Quantum ESPRESSO package in the study of low-dimensional systems on hybrid computing systems, in Proceedings of the 1st International Conference on Mathematical Modeling in Materials Science of Electronic Components (MMMEC-2019), Moscow: MAX Press, 2019, pp. 18–21.

17. Berriman, G.B., Deelman, E., Juve, G., Rynge, M., and Vockler, J.-S., The application of cloud computing to scientific workflows: a study of cost and performance, Phil. Trans. R. Soc. London, Ser. A, 2013, vol. 371, iss. 1983, p. 20120066.

18. Yakobovsky, M.V., Bondarenko, A.A., Vyrodov, A.V., Grigoryev, S.K., Kornilina, M.A., Plotnikov, A.I., Polyakov, S.V., Popov, I.V., Puzyrkov, D.V., and Sukov, S.A., Cloud service for solving multiscale nanotechnology problems on clusters and supercomputers, Izv. SFedU, Tech. Sci., 2016, no. 12 (185).

19. Gorchakov, A.Yu. and Posypkin, M.A., Comparison of multi-threaded implementations of the branch and bound method for multicore systems, Mod. Inform. Technol. IT Educ., 2018, vol. 14, no. 1, pp. 138–148.

20. Regulations of CKP "Informatics." http://www. Accessed January 22, 2020.



The experiments on the deployment of individual runtime environments for software packages of materials science were carried out using computing resources of shared research facilities CKP “Informatics” of FRC CSC RAS [20].

Author information



Corresponding authors

Correspondence to K. I. Volovich or S. A. Denisov.

Ethics declarations

The research is partially supported by the Russian Foundation for Basic Research (projects 18-29-03100, 19-29-03051).

Additional information

This article was prepared based on a report presented at the 1st International Conference on “Mathematical Modeling in Materials Science of Electronic Components” (Moscow, 2019).


About this article


Cite this article

Volovich, K.I., Denisov, S.A. The Main Scientific and Technical Problems of Using Hybrid HPC Clusters in Materials Science. Russ Microelectron 49, 574–579 (2020).



  • high-performance computing cluster
  • hybrid architecture
  • graphics accelerator
  • electronic structure calculations
  • quantum-mechanical molecular dynamics
  • VASP
  • Quantum ESPRESSO