
Parametric generation of conditional geological realizations using generative neural networks

  • Shing Chan
  • Ahmed H. Elsheikh
Open Access
Original Paper

Abstract

Deep learning techniques are increasingly being considered for geological applications where, much like in computer vision, the challenges are characterized by high-dimensional spatial data dominated by multipoint statistics. In particular, a recently introduced technique called generative adversarial networks (GANs) has been studied for geological parametrization and synthesis, obtaining results that are at least qualitatively competitive with previous methods. The method obtains a neural network parametrization of the geology, the so-called generator, that is capable of reproducing very complex geological patterns while reducing the dimensionality by several orders of magnitude. Subsequent works have addressed the conditioning task, i.e., using the generator to produce realizations that honor spatial observations (hard data). Current approaches, however, do not provide a parametrization of the conditional generation process. In this work, we propose a method to obtain a parametrization for the direct generation of conditional realizations. The main idea is simply to extend the existing generator network by stacking a second inference network that learns to perform the conditioning. This inference network is trained to sample a posterior distribution derived using a Bayesian formulation of the conditioning task. The resulting extended neural network thus provides the conditional parametrization. Our method is assessed on a benchmark image of a binary channelized subsurface, obtaining very promising results for a wide variety of conditioning configurations.
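
To make the construction concrete, the sketch below (in PyTorch, one plausible choice of framework) illustrates the stacking idea: a pretrained generator G mapping latent vectors z to realizations is kept frozen, and a second inference network I is trained so that G(I(w)) honors the hard data at observed cells, with the training signal derived from the Bayesian posterior p(z | d_obs) ∝ p(d_obs | z) p(z). This is a minimal sketch under stated assumptions, not the authors' exact architecture or loss; the names InferenceNet, conditioning_loss, obs_idx, and obs_val are hypothetical, and the entropy-style regularization that a full posterior-sampling scheme would need to keep realizations diverse is omitted for brevity.

    import torch
    import torch.nn as nn

    class InferenceNet(nn.Module):
        # Hypothetical inference network: maps auxiliary noise w to a
        # latent z such that the frozen generator G(z) honors the hard data.
        def __init__(self, wdim=30, zdim=30):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(wdim, 128), nn.SELU(),
                nn.Linear(128, 128), nn.SELU(),
                nn.Linear(128, zdim),
            )

        def forward(self, w):
            return self.net(w)

    def conditioning_loss(G, I, w, obs_idx, obs_val, lam=1e-2):
        # Negative log-posterior up to constants: a data-mismatch term
        # (Gaussian likelihood at the observed cells) plus a prior term
        # (standard-normal prior on z). Only I's parameters are trained.
        z = I(w)
        x = G(z)                                   # candidate realizations
        mismatch = ((x.flatten(1)[:, obs_idx] - obs_val) ** 2).sum(1).mean()
        prior = (z ** 2).sum(1).mean()
        return mismatch + lam * prior

A typical training step would then draw a batch of noise, e.g. w = torch.randn(64, 30), and update only the inference network with torch.optim.Adam(I.parameters(), lr=1e-4); the composition G(I(·)) is the conditional parametrization described in the abstract.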

Keywords

Parametrization · Deep learning · Geological models · Generative models · Multipoint geostatistics


Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Heriot-Watt University, Edinburgh, UK
