Lobachevskii Journal of Mathematics, Volume 39, Issue 9, pp. 1277–1286

Modifying Texture of a Photograph Object, with the Use of Neural Networks Ensemble

  • R. Vasilyev
  • V. Amelin
  • Yu. Rashchenko
  • D. Rashchenko
Part 1 of the special issue “High Performance Data Intensive Computing”. Editors: V. V. Voevodin, A. S. Simonov, and A. V. Lapin


In this paper we consider a technique for changing the texture of an object in a photograph. This task is relevant to intelligent image processing and has a number of practical applications. Solutions have been proposed in a number of works on neural style transfer, but they suffer from several limitations: the transfer is not selective (the entire image is changed), the target texture is distorted when the original texture is heterogeneous, the original illumination of the object is distorted, and the resulting image lacks photographic realism.

To address these problems, we propose sequential image processing using several types of neural networks: a segmentation network, a stylizing network, and a generative adversarial network (GAN). The problem of reliably transferring the object's illumination is solved by combining the GAN with methods that do not rely on the neural network approach.
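The selective-transfer step of such a pipeline can be sketched as mask-based compositing: the segmentation network yields a per-pixel object mask, the stylizing and GAN stages yield a re-textured image, and only the masked region replaces the original. A minimal NumPy sketch (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def composite_stylized(original, stylized, mask):
    """Blend a stylized rendering back into the original photograph,
    restricted to the segmented object (mask == 1).

    original, stylized: H x W x 3 float arrays in [0, 1].
    mask: H x W float array in [0, 1] (soft segmentation mask).
    """
    m = mask[..., None]  # broadcast the mask over the channel axis
    return m * stylized + (1.0 - m) * original

# Toy example: a 2 x 2 image where only the top row is "the object".
original = np.zeros((2, 2, 3))   # black background
stylized = np.ones((2, 2, 3))    # white re-textured version
mask = np.array([[1.0, 1.0],
                 [0.0, 0.0]])
out = composite_stylized(original, stylized, mask)
```

With a soft (fractional) mask the same formula linearly feathers the boundary between the re-textured object and the untouched background.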

In this work we developed an algorithm that solves the texture transfer task while completely or partially eliminating the listed shortcomings of the classical methods. The algorithm produces high-quality results while maintaining performance acceptable for everyday use. It is demonstrated on the task of virtually fitting dust covers onto furniture (sofas, armchairs). In addition to the algorithm itself, this work enumerates heuristics and limitations identified during its implementation and application.
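The paper does not spell out the non-neural illumination method; one common approach (an assumption here, not taken from the source) is to rescale the stylized result so that its per-pixel brightness matches the original photograph, preserving the object's shading. A minimal NumPy sketch with illustrative names:

```python
import numpy as np

def keep_original_luminance(original, stylized):
    """Rescale the stylized image so its per-pixel brightness matches
    the original photograph, keeping the stylized colour ratios.

    Both inputs are H x W x 3 float arrays in (0, 1].
    """
    # Per-pixel channel mean as a simple luminance proxy.
    lum_orig = original.mean(axis=-1, keepdims=True)
    lum_sty = stylized.mean(axis=-1, keepdims=True)
    return stylized * (lum_orig / np.maximum(lum_sty, 1e-6))

# One flat-grey pixel (brightness 0.5) restyled to brightness 0.4:
original = np.full((1, 1, 3), 0.5)
stylized = np.array([[[0.2, 0.4, 0.6]]])
out = keep_original_luminance(original, stylized)
```

A more faithful variant would do the same recombination in a perceptual colour space such as CIELAB; the channel-mean proxy above keeps the sketch dependency-free.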

Keywords and phrases

artificial neural network, image processing, CycleGAN, style transfer, segmentation





Copyright information

© Pleiades Publishing, Ltd. 2018

Authors and Affiliations

  • R. Vasilyev (1)
  • V. Amelin (2)
  • Yu. Rashchenko (3)
  • D. Rashchenko (3)
  1. Moscow Institute of Physics and Technology (State University), Dolgoprudnyi, Moscow oblast, Russia
  2. Lomonosov Moscow State University, Moscow, Russia
  3. St. Petersburg State University, St. Petersburg, Russia
