# Parallel Shared-Memory Isogeometric Residual Minimization (iGRM) for Three-Dimensional Advection-Diffusion Problems


## Abstract

In this paper, we present a residual minimization method for three-dimensional isogeometric analysis simulations of advection-diffusion equations. First, we apply an implicit time integration scheme for the three-dimensional advection-diffusion equation, namely the Douglas-Gunn time integration scheme. Second, in every time step, we apply the residual minimization method for the stabilization of the numerical solution. Third, we use isogeometric analysis with B-spline basis functions for the numerical discretization. We perform alternating directions splitting of the resulting system of linear equations, so the computational cost of the sequential LU factorization is linear, \(\mathcal{O}(N)\). We test our method on a three-dimensional simulation of the advection-diffusion problem. We parallelize the solver for shared-memory machines using the GALOIS framework.

## Keywords

Isogeometric analysis · Implicit dynamics · Advection-diffusion problems · Linear computational cost · Direct solvers · GALOIS framework

## 1 Introduction

The alternating direction implicit (ADI) method is a popular method for performing finite difference simulations on regular grids. The first papers concerning the ADI method were published in the 1950s and 1960s [1, 3, 5, 19]. The method is still popular for fast solutions of different classes of problems with the finite difference method [8, 9]. In its basic version, the method introduces intermediate time steps, and the differential operator is split into its *x*, *y* (and, in 3D, *z*) components. As a result, the left-hand side contains derivatives in one direction only, while the rest of the operator is moved to the right-hand side. The resulting system of linear equations has a multi-diagonal form, so its factorization is possible with a linear \(\mathcal{O}(N)\) computational cost. It is a common misunderstanding that direction splitting solvers are limited to simple geometries. They can also be applied to discretizations of extremely complicated geometries, as described in [10].
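The linear cost of the one-dimensional factorizations behind ADI can be illustrated with the classical Thomas algorithm for tridiagonal systems. This is a generic sketch, not taken from the paper's code base; the function name is ours:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Thomas algorithm: solves a tridiagonal system in O(N) operations.
// a = sub-diagonal (a[0] unused), b = main diagonal,
// c = super-diagonal (c[N-1] unused), d = right-hand side.
std::vector<double> thomas_solve(std::vector<double> a, std::vector<double> b,
                                 std::vector<double> c, std::vector<double> d) {
    const std::size_t n = b.size();
    // Forward elimination: one O(1) update per row, O(N) total.
    for (std::size_t i = 1; i < n; ++i) {
        double m = a[i] / b[i - 1];
        b[i] -= m * c[i - 1];
        d[i] -= m * d[i - 1];
    }
    // Back substitution: again O(N).
    std::vector<double> x(n);
    x[n - 1] = d[n - 1] / b[n - 1];
    for (std::size_t i = n - 1; i-- > 0;) {
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i];
    }
    return x;
}
```

Each directional sweep of an ADI step solves many such independent tridiagonal (or, for higher-order B-splines, banded) one-dimensional systems, one per grid line; this independence is also a source of the parallelism discussed later in the paper.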

In this paper, we generalize this method to three-dimensional simulations of the time-dependent advection-diffusion problem with the residual minimization method. We use the basic version of the direction splitting algorithm, working on a regular computational cube, since this approach is straightforward and suffices to prove our claim that residual minimization stabilizes advection-diffusion simulations. In particular, we apply the residual minimization method within isogeometric finite element method simulations over a three-dimensional cube-shaped computational grid with tensor product B-spline basis functions. The resulting system of linear equations can be factorized with a linear \(\mathcal{O}(N)\) computational cost when executed in sequential mode.

We use the finite element method discretizations with B-spline basis functions. This setup, as opposed to the traditional finite difference discretization, allows us to apply the residual minimization method to stabilize our simulations.

Isogeometric analysis (IGA) [4] is a modern method for performing finite element method (FEM) simulations with B-splines and NURBS. It enables higher-order and higher-continuity B-spline-based approximations of the modeled phenomena. The direction splitting method has been rediscovered to solve the isogeometric \(L^2\) projection problem over regular grids with tensor product B-spline basis functions [6, 7]. The direction splitting, in this case, is performed with respect to space, and the splitting is possible by exploiting the Kronecker product structure of the Gram matrix, which follows from the tensor product structure of the B-spline basis functions. The \(L^2\) projections with IGA-FEM were applied for performing fast and smooth simulations of explicit dynamics [11, 12, 13, 14, 15, 16, 20]. This is because explicit dynamics with isogeometric discretization is equivalent to the solution of a sequence of isogeometric \(L^2\) projections.
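The key algebraic fact behind this splitting can be stated as follows (notation assumed here: \(M^x, M^y, M^z\) denote the one-dimensional Gram matrices in each direction):

```latex
\mathcal{M} \;=\; M^x \otimes M^y \otimes M^z,
\qquad
\mathcal{M}^{-1} \;=\; (M^x)^{-1} \otimes (M^y)^{-1} \otimes (M^z)^{-1},
```

so a solve with the full three-dimensional Gram matrix reduces to three sweeps of one-dimensional banded solves, each of linear computational cost.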

In this paper, we focus on the advection-diffusion equation used for the simulation of the propagation of a pollutant from a chimney. We introduce an implicit time integration scheme that allows for the alternating direction splitting of the advection-diffusion equation. We discover that the numerical simulations are unstable and deliver unexpected oscillations and reflections. Next, we utilize the residual minimization method in a way that preserves the Kronecker product structure of the matrix and enables stabilized solutions at linear computational cost.

*V* to \(V_h\). The optimality of the method depends on the quality of the polynomial test functions defining the space \(V_h = {\text {span}}\{ v_h \}\) and how far they are from the supremum defined in (1). There are many methods for the stabilization of different PDEs [28, 29, 30, 31]. In 2010, the Discontinuous Petrov-Galerkin (DPG) method was proposed; a modern summary of the method is given in [32].

*G* over the test space, the two blocks with the actual weak form *B* and \(B^T\), and the zero block 0. The test space is larger than the trial space, and the inner product and the weak form blocks are rather sparse matrices. Therefore, the dimension of the system of linear equations is at least two times larger than that of the original system arising from the standard Galerkin method. In the DPG method, the test space is broken in order to obtain a block-diagonal matrix *G*, so the Schur complements can be computed locally over each finite element. The price to pay is the presence of additional fluxes on the element interfaces, resulting from breaking the test spaces, so the system over each finite element looks like

In this paper, we want to avoid dealing with fluxes and broken spaces, since it is technically very complicated. Thus, we stay with the unbroken global system (4) and then have to choose one of two possible approaches. The first would be to apply the adaptive finite element method, but then the cost of factorization in 3D could be up to four times higher than for the standard finite element method and for broken DPG (without static condensation). This is because, depending on the structure of the refined mesh, the computational cost of the multi-frontal solver varies between \(\mathcal{O}(N)\) and \(\mathcal{O}(N^2)\) [33], and our *N* is two times bigger than in the original weak problem, and \(2^2=4\). This is an option that we will discuss in a future paper.

The other method, which we exploit in this paper, is to keep the tensor product structure of the computational patch of elements with tensor product B-spline basis functions, decompose the system matrix into a Kronecker product structure, and utilize a linear computational cost alternating directions solver. Even for the system (4) resulting from the residual minimization, we successfully perform direction splitting to obtain a Kronecker product structure of the matrix and maintain the linear computational cost of the alternating directions method.
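As a minimal illustration of how a Kronecker-structured system is solved by directional sweeps, consider the two-dimensional case \((A \otimes B)\,x = d\). Using the row-major identity \((A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(A X B^T)\), the solve factors into one sweep per direction. This is a generic dense sketch under our own naming (the solver in the paper uses banded one-dimensional matrices, which makes each sweep linear in cost):

```cpp
#include <cassert>
#include <cmath>
#include <utility>
#include <vector>

using Mat = std::vector<std::vector<double>>;

// Dense Gaussian elimination with partial pivoting; solves M x = b.
std::vector<double> gauss_solve(Mat M, std::vector<double> b) {
    const std::size_t n = M.size();
    for (std::size_t k = 0; k < n; ++k) {
        std::size_t piv = k;
        for (std::size_t i = k + 1; i < n; ++i)
            if (std::fabs(M[i][k]) > std::fabs(M[piv][k])) piv = i;
        std::swap(M[k], M[piv]);
        std::swap(b[k], b[piv]);
        for (std::size_t i = k + 1; i < n; ++i) {
            double f = M[i][k] / M[k][k];
            for (std::size_t j = k; j < n; ++j) M[i][j] -= f * M[k][j];
            b[i] -= f * b[k];
        }
    }
    std::vector<double> x(n);
    for (std::size_t i = n; i-- > 0;) {
        double s = b[i];
        for (std::size_t j = i + 1; j < n; ++j) s -= M[i][j] * x[j];
        x[i] = s / M[i][i];
    }
    return x;
}

// Solves (A (x) B) x = d, with x stored row-major as x[i*m + j],
// by two directional sweeps: first with A, then with B.
std::vector<double> kron_solve(const Mat& A, const Mat& B,
                               const std::vector<double>& d) {
    const std::size_t n = A.size(), m = B.size();
    // Sweep 1: Z = A^{-1} D, solving A against each column of D.
    std::vector<double> Z(n * m);
    for (std::size_t j = 0; j < m; ++j) {
        std::vector<double> col(n);
        for (std::size_t i = 0; i < n; ++i) col[i] = d[i * m + j];
        std::vector<double> z = gauss_solve(A, col);
        for (std::size_t i = 0; i < n; ++i) Z[i * m + j] = z[i];
    }
    // Sweep 2: X B^T = Z, i.e. solve B against each row of Z.
    std::vector<double> X(n * m);
    for (std::size_t i = 0; i < n; ++i) {
        std::vector<double> row(Z.begin() + i * m, Z.begin() + (i + 1) * m);
        std::vector<double> x = gauss_solve(B, row);
        for (std::size_t j = 0; j < m; ++j) X[i * m + j] = x[j];
    }
    return X;
}
```

The three-dimensional case adds one more sweep; the independent one-dimensional solves within each sweep are also where the shared-memory parallelism comes from.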

In order to stabilize the time-dependent advection-diffusion simulations, we perform the following steps. First, we apply the time integration scheme; we use the second-order Douglas-Gunn time integration scheme [2]. Second, we stabilize the system from every time step by employing the residual minimization method [34, 35, 36]. Finally, we perform the numerical discretization with isogeometric analysis [4], using tensor product B-spline basis functions over a three-dimensional cube-shaped patch of elements.
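For reference, a common form of the Douglas-Gunn scheme for \(u_t = (\mathcal{L}_x + \mathcal{L}_y + \mathcal{L}_z)u + f\) reads as follows (notation assumed here; the exact sub-steps used in the paper may differ in detail):

```latex
\begin{aligned}
\left(I - \tfrac{\tau}{2}\mathcal{L}_x\right) u^{*}
  &= \left(I + \tfrac{\tau}{2}\mathcal{L}_x + \tau\mathcal{L}_y + \tau\mathcal{L}_z\right) u^{n} + \tau f, \\
\left(I - \tfrac{\tau}{2}\mathcal{L}_y\right) u^{**}
  &= u^{*} - \tfrac{\tau}{2}\mathcal{L}_y u^{n}, \\
\left(I - \tfrac{\tau}{2}\mathcal{L}_z\right) u^{n+1}
  &= u^{**} - \tfrac{\tau}{2}\mathcal{L}_z u^{n}.
\end{aligned}
```

Each sub-step inverts an operator acting in a single spatial direction only, which is what preserves the banded one-dimensional structure of the systems while retaining second-order accuracy in time.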

The novelties of this paper with regard to our previous work are the following. In [11], we described a parallel object-oriented JAVA-based implementation of the explicit dynamics version of the alternating directions solver, without any residual minimization stabilization, and for two-dimensional problems only. In [12], we described a sequential Fortran-based implementation of the explicit dynamics solver, with applications to elastic wave propagation, without implicit time integration schemes or residual minimization stabilization. In [16], we described the parallel distributed-memory implementation of the explicit dynamics solver, again without an implicit time integration scheme or the residual minimization method. In [14], we described the parallel shared-memory implementation of the explicit dynamics solver, with the same restrictions as before. In [13, 17], we applied the explicit dynamics solver to two- and three-dimensional tumor growth simulations. In none of these papers did we use implicit time integration schemes or perform operator splitting on top of the residual minimization method. In [20], we investigated different time integration schemes for the two-dimensional residual minimization method for advection-diffusion problems, without three-dimensional computations and without parallel computations.

In this paper, we apply the residual minimization with direction splitting for the first time in three-dimensions. We also investigate the parallel scalability of our solver, using the GALOIS framework for parallelization. For more details on the GALOIS framework itself, we refer to [21, 22, 23, 24].

The structure of this paper is the following. We start in Sect. 2 with the derivation of the isogeometric alternating direction implicit method for the advection-diffusion problem. The following Sect. 3 derives the residual minimization method formulation of the advection-diffusion problem in three-dimensions. Next, in Sect. 4, we present the linear computational cost numerical results. We summarize the paper with conclusions in Sect. 5.

## 2 Model Problem of Three-Dimensional Advection-Diffusion

We consider the *linear advection-diffusion equation*.
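The equation itself is not reproduced in this extraction; a hedged sketch of the strong form, using the diffusion coefficient \(\alpha\), the velocity field \(\beta\), and the source *f* named later in the text, is:

```latex
\frac{\partial u}{\partial t} + \beta \cdot \nabla u - \nabla \cdot (\alpha \nabla u) = f
\quad \text{in } \Omega \times (0, T],
```

together with appropriate initial and boundary conditions.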

## 3 Isogeometric Residual Minimization Method

*V* is defined as

*V*

*B* and \(B^T\) can be split according to (9), and the inner product (14) part *G* can be split in the following way:

*x*, *y*, or *z*, respectively.

Now, in the first sub-step, we approximate the solution with a tensor product of one-dimensional B-spline basis functions of order *p*, \(u_h = \sum _{i,j,k} u_{i,j,k} B^x_{i;p}(x)B^y_{j;p}(y)B^z_{k;p}(z)\). We test with a tensor product of one-dimensional B-spline basis functions, where we enrich the order from *p* to \(o \ge p\) only in the direction of the alternating splitting, here the *x* axis: \(v_m \leftarrow B^x_{i;o}(x)B^y_{j;p}(y)B^z_{k;p}(z)\). We approximate the residual with a tensor product of one-dimensional B-spline basis functions of orders *o* and *p* in the corresponding directions, \(r_m = \sum _{s,t,q} r_{s,t,q} B^x_{s;o}(x)B^y_{t;p}(y)B^z_{q;p}(z)\), and we test with a tensor product of 1D B-spline basis functions of orders *o* and *p* in the corresponding directions, \(w_h \leftarrow B^x_{k;o}(x)B^y_{l;p}(y)B^z_{m;p}(z)\).

## 4 Numerical Results

### 4.1 Manufactured Solution Problem

We select the forcing function \(f(x, y, z; t)\) in such a way that it delivers the manufactured solution of the form \(u_{exact}(x,y,z;t)=\sin (\pi x)\sin (\pi y)\sin (\pi z)\sin (\pi t)\) on the time interval [0, 2].
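The manufactured solution translates directly into code (a small sketch; the function name is ours):

```cpp
#include <cassert>
#include <cmath>

// Manufactured solution from the paper:
// u(x, y, z; t) = sin(pi x) sin(pi y) sin(pi z) sin(pi t).
double u_exact(double x, double y, double z, double t) {
    const double pi = std::acos(-1.0);
    return std::sin(pi * x) * std::sin(pi * y) *
           std::sin(pi * z) * std::sin(pi * t);
}
```

It vanishes on the boundary of the unit cube for all *t*, which is what makes it convenient for measuring the error of the scheme in isolation from boundary effects.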

We solve the problem with residual minimization method on \(32 \times 32 \times 32\) mesh with different time steps, as presented in Fig. 1, using the Douglas-Gunn time integration scheme and the direction splitting solver using the Kronecker product structure of the matrices.

We compute the error between the exact solution \(u_{exact}\) and the numerical solution \(u_h\). We present comparisons for different time step sizes \(\tau \). We compute the relative error \({\Vert }u_{\text {exact}}(t) - u_{\text {h}}(t){\Vert }_{L^2} / {\Vert }u_{\text {exact}}(t){\Vert }_{L^2} \cdot 100 \%\) and plot it in Fig. 1. Each curve corresponds to a time step size selected for the entire simulation and presents the numerical error with respect to the known exact solution.

The Douglas-Gunn scheme is second-order accurate, down to an accuracy of \(10^{-5}\).

### 4.2 Pollution Propagation Simulations

*x*, *y*, and *z*. We apply the alternating direction implicit solver with three intermediate time steps. The velocity field is \(\beta =(\beta ^x(t),\beta ^y(t),\beta ^z(t))=(\cos a(t),\sin a(t),v(t))\), where \(a(t)=\frac{\pi }{3} (\sin (s) + \frac{1}{2} \sin (2.3 s)) + \frac{3}{8} \pi \), \(v(t)=\frac{1}{3} \sin (s)\), and \(s=\frac{t}{150}\). The source is given by \(f(p) = (r - 1)^2 (r + 1)^2\), where \(r = \min (1, (|p - p_0| / 25)^2)\),

*p* denotes a point in the domain, \(|p - p_0|\) is its distance from the source, and \(p_0=(3,3,2)\) is the location of the source. The initial state is defined as a constant concentration of the order of \(10^{-6}\) in the entire domain (numerical zero).
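The wind and source formulas above translate directly into code (a sketch; the names `wind` and `source` are ours):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Time-dependent wind field from the paper:
// beta = (cos a(t), sin a(t), v(t)), with
// a(t) = pi/3 (sin s + 0.5 sin(2.3 s)) + 3 pi / 8,
// v(t) = sin(s)/3, s = t/150.
std::array<double, 3> wind(double t) {
    const double pi = std::acos(-1.0);
    double s = t / 150.0;
    double a = pi / 3.0 * (std::sin(s) + 0.5 * std::sin(2.3 * s)) + 3.0 * pi / 8.0;
    double v = std::sin(s) / 3.0;
    return {std::cos(a), std::sin(a), v};
}

// Source centered at p0 = (3, 3, 2):
// f(p) = (r - 1)^2 (r + 1)^2, r = min(1, (|p - p0| / 25)^2).
double source(double x, double y, double z) {
    double dx = x - 3.0, dy = y - 3.0, dz = z - 2.0;
    double r = std::min(1.0, (dx * dx + dy * dy + dz * dz) / (25.0 * 25.0));
    return (r - 1.0) * (r - 1.0) * (r + 1.0) * (r + 1.0);
}
```

Note that the horizontal wind components always form a unit vector, the source peaks at value 1 at \(p_0\), and it vanishes smoothly at distance 25 from the source.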

The physical meaning of this setup is the following. We model the propagation of a pollutant generated by a single source, modeled by the *f* function, distributed by wind blowing in changing directions, modeled by the \(\beta \) function, and by the diffusion phenomena, modeled by the coefficients \(\alpha \). The computational domain unit is the meter [*m*], the wind velocity \(\beta \) is given in meters per second \([\frac{m}{s}]\), and the diffusion coefficient \(\alpha \) is given in square meters per second \([\frac{m^2}{s}]\). The units of the solution are then kilograms per cubic meter \([\frac{kg}{m^3}]\). We expect the numerical results to show the propagation of the pollutant as distributed by the wind and the diffusion process.

Our first numerical results concern a computational mesh of size \(50\times 50 \times 50\) elements with quadratic B-splines. We employ the standard Galerkin formulation here with direction splitting, without the residual minimization method. We perform 300 time steps of the numerical simulation. The snapshots presented in Fig. 2 represent time steps 100, 200, and 300. We observe unexpected “oscillations” and “reflections”. Since the simulation is supposed to model the propagation of the pollutant from a chimney by means of the advection (wind) and diffusion phenomena, the oscillations and reflections on the boundary are not expected there. Both phenomena appear and disappear during the simulation; they do not cause a blowup of the entire simulation, just unexpected local behavior.

We use the implicit extension of the parallel code [14] for shared-memory Linux cluster nodes. The total simulation time was 100 min on a laptop with an i7 6700Q processor at 2.6 GHz (8 cores with HT) and 16 GB of RAM. We emphasize that ADI is not an iterative solver. It is a linear \(\mathcal{O}(N)\) computational cost solver that performs Gaussian elimination for matrices having Kronecker product structure. Thus, the solution obtained by the solver is exact (up to round-off errors). In this sense, we do not present iterations or convergence of the ADI solver, since it is executed once per time step. In other words, we can perform 300 Gaussian eliminations, each with 1,000,000 unknowns, with the high accuracy resulting from the ADI direct solver (only round-off errors are involved), on a laptop with eight cores, with the implicit method, within less than 2 h.

## 5 Parallel Scalability

Implementation of the ideas described in the preceding sections has been created in C++ and parallelized using our code for IGA-FEM simulations with ADI solver [14], extended to the implicit method. We use the GALOIS framework for parallelization [21, 22, 23, 24].

The parallelization relies on the *Galois::for_each* parallel loop construct and the *Galois::Runtime::LL::SimpleLock* synchronization primitive.

For linear B-splines for the trial space and quadratic B-splines for the test space, on large grids (\(32\times 32\times 32\) and \(64\times 64\times 64\)), the speedup grows up to 16 cores, reaching around 10–11. The corresponding efficiency for 16 cores is around 0.7. For 32 cores, the speedup goes down, since with more than 20 cores hyperthreading is utilized.
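The quoted numbers follow the standard definitions of speedup and efficiency on *c* cores,

```latex
S(c) = \frac{T(1)}{T(c)}, \qquad E(c) = \frac{S(c)}{c},
```

so, for example, a speedup of 11 on 16 cores corresponds to an efficiency of \(11/16 \approx 0.69\).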

For quadratic B-splines for the trial space and cubic B-splines for the test space, on the same grids, the speedup grows up to 16 cores, reaching around 12–14. The corresponding efficiency for 16 cores is around 0.8–0.9. For 32 cores and the \(32\times 32\times 32\) mesh the speedup grows up to 17, while for the \(64\times 64\times 64\) mesh it decreases slightly, since with more than 20 cores hyperthreading is used.

For cubic B-splines for the trial space and quartic B-splines for the test space, on the same grids, the speedup grows up to 32 cores. It is around 15 for 16 cores (near-perfect speedup) and around 20 for 32 cores, where hyperthreading is used (more than 20 cores). The corresponding efficiency is around 0.9–1.0 for 16 cores and decreases slightly to 0.6–0.7 for 32 cores.

Increasing the mesh size improves the parallel scalability up to the \(32\times 32 \times 32\) mesh; the larger \(64\times 64\times 64\) mesh performs slightly worse than the \(32\times 32 \times 32\) mesh.

The most interesting observation is that increasing the B-spline order improves the parallel scalability. This is important from the point of view of the stabilization with the residual minimization method: the order of the B-splines in the test space is increased to enforce the stabilization, so when we increase the order to obtain the stabilization, we also improve the parallel scalability.

## 6 Conclusions

We introduced an isogeometric finite element method for implicit simulations of the advection-diffusion problem with the Douglas-Gunn time-integration scheme, which results in a Kronecker product structure of the matrix in every time step. The application of B-spline basis functions for the approximation of the numerical solution results in a smooth, higher-order approximation of the solution. It also enables the residual minimization stabilization with a linear computational cost \(\mathcal{O}(N)\) of the direct solver. The method has been verified on a three-dimensional advection-diffusion problem. Our future work will involve the extension of the model to more complicated equations and geometries. In particular, we plan to use the isogeometric alternating direction implicit solver for tumor growth simulations in two and three dimensions [13, 17]. Our equations can also be extended to model a pollution problem with different chemical components propagating and reacting together through space and time, as described in [18].

## Acknowledgments

The work of Maciej Paszyński and Marcin Łoś, and the visit of Judit Muñoz-Matute at AGH, have been supported by the National Science Centre, Poland, grant no. 2017/26/M/ST1/00281. The J. Tinsley Oden Faculty Fellowship Research Program at the Institute for Computational Engineering and Sciences (ICES) of the University of Texas at Austin supported the visit of Maciej Paszyński to ICES.

## References

1. Peaceman, D.W., Rachford Jr., H.H.: The numerical solution of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. **3**, 28–41 (1955)
2. Douglas, J., Gunn, J.E.: A general formulation of alternating direction methods. Numer. Math. **6**(1), 428–453 (1964)
3. Birkhoff, G., Varga, R.S., Young, D.: Alternating direction implicit methods. Adv. Comput. **3**, 189–273 (1962)
4. Cottrell, J.A., Hughes, T.J.R., Bazilevs, Y.: Isogeometric Analysis: Towards Unification of CAD and FEA. Wiley, Hoboken (2009)
5. Douglas, J., Rachford, H.: On the numerical solution of heat conduction problems in two and three space variables. Trans. Am. Math. Soc. **82**, 421–439 (1956)
6. Gao, L., Calo, V.M.: Fast isogeometric solvers for explicit dynamics. Comput. Methods Appl. Mech. Eng. **274**(1), 19–41 (2014)
7. Gao, L., Calo, V.M.: Preconditioners based on the alternating-direction-implicit algorithm for the 2D steady-state diffusion equation with orthotropic heterogeneous coefficients. J. Comput. Appl. Math. **273**(1), 274–295 (2015)
8. Guermond, J.L., Minev, P.: A new class of fractional step techniques for the incompressible Navier-Stokes equations using direction splitting. C.R. Math. **348**(9–10), 581–585 (2010)
9. Guermond, J.L., Minev, P., Shen, J.: An overview of projection methods for incompressible flows. Comput. Methods Appl. Mech. Eng. **195**, 6011–6054 (2006)
10. Keating, J., Minev, P.: A fast algorithm for direct simulation of particulate flows using conforming grids. J. Comput. Phys. **255**, 486–501 (2013)
11. Gurgul, G., Woźniak, M., Łoś, M., Szeliga, D., Paszyński, M.: Open source JAVA implementation of the parallel multi-thread alternating direction isogeometric L2 projections solver for material science simulations. Comput. Methods Mater. Sci. **17**, 1–11 (2017)
12. Łoś, M., Woźniak, M., Paszyński, M., Dalcin, L., Calo, V.M.: Dynamics with matrices possessing Kronecker product structure. Procedia Comput. Sci. **51**, 286–295 (2015)
13. Łoś, M., Paszyński, M., Kłusek, A., Dzwinel, W.: Application of fast isogeometric L2 projection solver for tumor growth simulations. Comput. Methods Appl. Mech. Eng. **316**, 1257–1269 (2017)
14. Łoś, M., Woźniak, M., Paszyński, M., Lenharth, A., Pingali, K.: IGA-ADS: isogeometric analysis FEM using ADS solver. Comput. Phys. Commun. **217**, 99–116 (2017)
15. Łoś, M., Paszyński, M.: Applications of alternating direction solver for simulations of time-dependent problems. Comput. Sci. **18**(2), 117–128 (2017)
16. Woźniak, M., Łoś, M., Paszyński, M., Dalcin, L., Calo, V.M.: Parallel fast isogeometric solvers for explicit dynamics. Comput. Inform. **36**(2), 423–448 (2017)
17. Łoś, M., Kłusek, A., Hassam, M.A., Pingali, K., Dzwinel, W., Paszyński, M.: Parallel fast isogeometric L2 projection solver with GALOIS system for 3D tumor growth simulations. Comput. Methods Appl. Mech. Eng. **343**, 1–22 (2019)
18. Oliver, A., Montero, G., Montenegro, R., Rodríguez, E., Escobar, J.M., Pérez-Foguet, A.: Adaptive finite element simulation of stack pollutant emissions over complex terrain. Energy **49**, 47–60 (2013)
19. Wachspress, E.L., Habetler, G.: An alternating-direction-implicit iteration technique. J. Soc. Ind. Appl. Math. **8**, 403–423 (1960)
20. Łoś, M., Muñoz-Matute, J., Muga, I., Paszyński, M.: Isogeometric residual minimization method (iGRM) with direction splitting for non-stationary advection-diffusion problems. Comput. Math. Appl. (2019, in press)
21. Pingali, K., et al.: The tao of parallelism in algorithms. SIGPLAN Not. **46**(6), 12–25 (2011)
22. Hassaan, M.A., Burtscher, M., Pingali, K.: Ordered vs. unordered: a comparison of parallelism and work-efficiency in irregular algorithms. In: Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming, PPoPP 2011 (2011)
23. Lenharth, A., Nguyen, D., Pingali, K.: Priority queues are not good concurrent priority schedulers. In: Träff, J.L., Hunold, S., Versaci, F. (eds.) Euro-Par 2015. LNCS, vol. 9233, pp. 209–221. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-48096-0_17
24. Kulkarni, M., Pingali, K., Walter, B., Ramanarayanan, G., Bala, K., Chew, L.P.: Optimistic parallelism requires abstractions. ACM SIGPLAN Not. **42**(6), 211–222 (2007)
25. Demkowicz, L.: Babuška \(\Leftrightarrow\) Brezzi. ICES-Report 0608, The University of Texas at Austin (2006). https://www.ices.utexas.edu/media/reports/2006/0608.pdf
26. Babuška, I.: Error bounds for finite element method. Numer. Math. **16**, 322–333 (1971)
27. Brezzi, F.: On the existence, uniqueness and approximation of saddle-point problems arising from Lagrange multipliers. ESAIM: Math. Model. Numer. Anal. **8**(R2), 129–151 (1974)
28. Hughes, T.J.R., Scovazzi, G., Tezduyar, T.E.: Stabilized methods for compressible flows. J. Sci. Comput. **43**(3), 343–368 (2010)
29. Franca, L.P., Frey, S.L., Hughes, T.J.R.: Stabilized finite element methods: I. Application to the advective-diffusive model. Comput. Methods Appl. Mech. Eng. **95**(2), 253–276 (1992)
30. Franca, L.P., Frey, S.L.: Stabilized finite element methods: II. The incompressible Navier-Stokes equations. Comput. Methods Appl. Mech. Eng. **99**(2–3), 209–233 (1992)
31. Brezzi, F., Bristeau, M.-O., Franca, L.P., Mallet, M., Rogé, G.: A relationship between stabilized finite element methods and the Galerkin method with bubble functions. Comput. Methods Appl. Mech. Eng. **96**(1), 117–129 (1992)
32. Demkowicz, L., Gopalakrishnan, J.: An overview of the DPG method. In: Feng, X., Karakashian, O., Xing, Y. (eds.) Recent Developments in Discontinuous Galerkin Finite Element Methods for Partial Differential Equations. IMA Volumes in Mathematics and its Applications, vol. 157, pp. 149–180 (2014)
33. Paszyński, M., Pardo, D., Calo, V.M.: Direct solvers performance on \(h\)-adapted grids. Comput. Math. Appl. **70**(3), 282–295 (2015)
34. Chan, J., Evans, J.A.: A minimum-residual finite element method for the convection-diffusion equations. ICES-Report 13-12 (2013)
35. Broersen, D., Dahmen, W., Stevenson, R.P.: On the stability of DPG formulations of transport equations. Math. Comput. **87**, 1051–1082 (2018)
36. Broersen, D., Stevenson, R.: A robust Petrov-Galerkin discretisation of convection-diffusion equations. Comput. Math. Appl. **68**(11), 1605–1618 (2014)