Stability of the Deep Neural Networks Learning Process in the Recognition Problems of the Material Microstructure

  • A. V. Klyuev
  • V. Yu. Stolbov
  • M. B. Gitman
  • R. A. Klestov
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 78)


The paper investigates the algorithmic stability of deep neural network learning in problems of material microstructure recognition. It is shown that at an 8% quantitative deviation in the basic test set, the trained network loses algorithmic stability. This means that with such a quantitative or qualitative deviation in the training or test sets, the results obtained with such a trained network can hardly be trusted.

Although the results of this study apply to a particular case, namely microstructure recognition using ResNet-152, the authors propose a cheaper method for studying stability based on analysis of the test set rather than the training set.
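The test-set-based stability probe described above can be illustrated with a minimal sketch: a trained classifier is evaluated repeatedly on randomly thinned copies of the test set, and the accuracy spread at each deviation fraction (e.g. around the 8% threshold reported in the paper) indicates whether the result is stable. All names here (`stability_curve`, the toy model and data) are hypothetical illustrations, not the authors' actual procedure or ResNet-152 pipeline.

```python
import random

def stability_curve(model, test_set, deviations, trials=20, seed=0):
    """Estimate how a trained model's test accuracy varies when a
    fraction of the test set is dropped at random (a test-set-based
    stability probe; hypothetical sketch).

    model      -- callable mapping a sample to a predicted label
    test_set   -- list of (sample, label) pairs
    deviations -- fractions of the test set to remove, e.g. [0.02, 0.08]
    Returns {deviation: (mean_accuracy, accuracy_spread)}.
    """
    rng = random.Random(seed)
    results = {}
    for d in deviations:
        keep = max(1, int(round(len(test_set) * (1.0 - d))))
        accs = []
        for _ in range(trials):
            # Random subset simulating a quantitative deviation of size d.
            subset = rng.sample(test_set, keep)
            correct = sum(1 for x, y in subset if model(x) == y)
            accs.append(correct / len(subset))
        mean = sum(accs) / len(accs)
        spread = max(accs) - min(accs)  # large spread = unstable result
        results[d] = (mean, spread)
    return results

# Toy usage with a synthetic threshold "model" (purely illustrative).
data = [((i,), int(i > 50)) for i in range(100)]
model = lambda x: int(x[0] > 48)  # imperfect classifier
curve = stability_curve(model, data, [0.02, 0.05, 0.08])
```

A growing spread as the deviation fraction increases is the signal of interest: past some threshold, the measured accuracy no longer characterizes the trained network reliably.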


Keywords: Deep neural networks · Material microstructure · Image recognition · Deep learning · Algorithmic stability



The reported study was funded by the Ministry of Science and Higher Education of the Russian Federation (unique identifier RFMEFI58617X0055) and by the EC Horizon 2020 MSCA-RISE-2016 project FRAMED (Fracture across Scales and Materials, Processes and Disciplines). The authors are grateful to the staff of the Institute of Nanosteels of MSTU named after G.I. Nosov, in particular to M. P. Baryshnikov, for the experimental data provided, which made it possible to train the constructed neural network to a given accuracy.



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • A. V. Klyuev (1)
  • V. Yu. Stolbov (1)
  • M. B. Gitman (1)
  • R. A. Klestov (1)

  1. Perm National Research Polytechnic University, Perm, Russia
