Programming for Computations - Python, pp. 287-309

# Solving Partial Differential Equations

## Abstract

We now turn to the solving of differential equations in which the solution is a function that depends on several independent variables. One such equation is called a *partial differential equation* (PDE, plural: PDEs).

The subject of PDEs is enormous. At the same time, it is very important, since so many phenomena in nature and technology find their mathematical formulation through such equations. Knowing how to solve at least some PDEs is therefore of great importance to engineers. In an introductory book like this, nowhere near full justice to the subject can be made. However, we still find it valuable to give the reader a glimpse of the topic by presenting a few basic and general methods that we will apply to a very common type of PDE.

In one space dimension, the diffusion equation reads

$$\displaystyle \frac{\partial u}{\partial t} = \beta\frac{\partial^2 u}{\partial x^2} + g(x,t),$$

where *β* is a known, constant physical parameter and *g* is a prescribed function. The unknown in the diffusion equation is a function *u*(*x*, *t*) of space and time. The physical significance of *u* depends on what type of process is described by the diffusion equation. For example, *u* is the concentration of a substance if the diffusion equation models transport of this substance by *diffusion*. Diffusion processes are of particular relevance at the microscopic level in biology, e.g., diffusive transport of certain ion types in a cell caused by molecular collisions. There is also diffusion of atoms in a solid, for instance, and diffusion of ink in a glass of water.

One very popular application of the diffusion equation is for heat transport in solid bodies. Then *u* is the temperature, and the equation predicts how the temperature evolves in space and time within the solid body. For such applications, the equation is known as the *heat equation*. We remark that the temperature in a fluid is influenced not only by diffusion, but also by the flow of the liquid. If present, the latter effect requires an extra term in the equation (known as an advection or convection term).

The term *g* is known as the *source term* and represents generation, or loss, of heat (by some mechanism) within the body. For diffusive transport, *g* models injection or extraction of the substance.

We should also mention that the diffusion equation may appear after simplifying more complicated PDEs. For example, flow of a viscous fluid between two flat and parallel plates is described by a one-dimensional diffusion equation, where *u* then is the fluid velocity.

A PDE is solved in some *domain* *Ω* in space and for a time interval [0, *T*]. The solution of the equation is not unique unless we also prescribe *initial and boundary conditions*. The type and number of such conditions depend on the type of equation. For the diffusion equation, we need one initial condition, *u*(*x*, 0), stating what *u* is when the process starts. In addition, the diffusion equation needs one boundary condition at each point of the boundary *∂Ω* of *Ω*. This condition can either be that *u* is known or that we know the normal derivative, ∇*u* ⋅ *n* = *∂u*∕*∂n* (*n* denotes an outward unit normal to *∂Ω*).

## 9.1 Example: Temperature Development in a Rod

Let us look at a specific application and how the diffusion equation with initial and boundary conditions then appears. We consider the evolution of temperature in a one-dimensional medium, more precisely a long rod, where the surface of the rod is covered by an insulating material. Heat can then not escape from the surface, which means that the temperature distribution will only depend on a coordinate along the rod, *x*, and time *t*. At one end of the rod, *x* = *L*, we also assume that the surface is insulated, but at the other end, *x* = 0, we assume that we have some device for controlling the temperature of the medium. Here, a function *s*(*t*) specifies the temperature at this end as a function of time, so we have a boundary condition *u*(0, *t*) = *s*(*t*). At the insulated end, *x* = *L*, heat cannot escape, which is expressed by the boundary condition *∂u*(*L*, *t*)∕*∂x* = 0. The surface along the rod is also insulated and hence subject to the same boundary condition (here generalized to *∂u*∕*∂n* = 0 at the curved surface). However, since we have reduced the problem to one dimension, we do not need this physical boundary condition in our mathematical model. In one dimension, we can set *Ω* = [0, *L*].

The complete initial-boundary value problem for our example then reads

$$\displaystyle \frac{\partial u}{\partial t} = \beta\frac{\partial^2 u}{\partial x^2} + g(x,t), \quad x\in(0,L),\ t\in(0,T], \qquad (9.1)$$

$$\displaystyle u(0,t) = s(t), \qquad (9.2)$$

$$\displaystyle \frac{\partial}{\partial x}u(L,t) = 0, \qquad (9.3)$$

$$\displaystyle u(x,0) = I(x) . \qquad (9.4)$$

We note that at *t* = 0, the initial condition (9.4) holds and that the PDE (9.1) comes into play for *t* > 0. Similarly, at the end points, the boundary conditions (9.2) and (9.3) govern *u*, and the equation is therefore valid for *x* ∈ (0, *L*).

### **Boundary and initial conditions are needed!**

The initial and boundary conditions are extremely important. Without them, the solution is not unique, and no numerical method will work. Unfortunately, many physical applications have one or more initial or boundary conditions as unknowns. Such situations can be dealt with if we have measurements of *u*, but the mathematical framework is much more complicated.

What about the source term *g* in our example with temperature distribution in a rod? *g*(*x*, *t*) models heat generation inside the rod. One could think of chemical reactions at a microscopic level in some materials as a reason to include *g*. However, in most applications with temperature evolution, *g* is zero and heat generation usually takes place at the boundary (as in our example with *u*(0, *t*) = *s*(*t*)).

### 9.1.1 A Particular Case

Before continuing, we may consider an example of how the temperature distribution evolves in the rod. At time *t* = 0, we assume that the temperature is 10 °C. Then we suddenly apply a device at *x* = 0 that keeps the temperature at 50 °C at this end. What happens inside the rod? Intuitively, you expect that the heating at the end will warm up the material in the vicinity of *x* = 0, and as time goes by, more and more of the rod will be heated, until the entire rod has reached a temperature of 50 °C (recall that no heat escapes from the surface of the rod).

Mathematically, (with the temperature in Kelvin) this example has *I*(*x*) = 283 K, except at the end point: *I*(0) = 323 K, *s*(*t*) = 323 K, and *g* = 0. The figure below shows snapshots from two different times in the evolution of the temperature.

## 9.2 Finite Difference Methods

We shall now construct a numerical method for the diffusion equation. We know how to solve ODEs, so in a way we are able to deal with the time derivative. Very often in mathematics, a new problem can be solved by reducing it to a series of problems we know how to solve. In the present case, it means that we must do something with the spatial derivative *∂*^{2}∕*∂x*^{2} in order to reduce the PDE to ODEs. One important technique for achieving this, is based on finite difference discretization of spatial derivatives.

### 9.2.1 Reduction of a PDE to a System of ODEs

We introduce a mesh in space over the domain *Ω* with *mesh points*

$$\displaystyle x_0 = 0 < x_1 < \cdots < x_N = L .$$

The region between two neighboring mesh points *x*_{i} and *x*_{i+1}, i.e. the interval [*x*_{i}, *x*_{i+1}], is called a *cell*. We shall here, for simplicity, assume that each cell has the same length *Δx* = *x*_{i+1} − *x*_{i}, *i* = 0, …, *N* − 1.

The PDE is valid for all *x* ∈ *Ω*, but we may relax this condition and demand that it is fulfilled at the internal mesh points only, *x*_{1}, …, *x*_{N−1}:

$$\displaystyle \frac{\partial u(x_i,t)}{\partial t} = \beta\frac{\partial^2 u(x_i,t)}{\partial x^2} + g(x_i,t), \quad i=1,\ldots,N-1 . \qquad (9.5)$$

Now, at any mesh point *x*_{i} we can approximate the second-order derivative by a *finite difference*:

$$\displaystyle \frac{\partial^2 u(x_i,t)}{\partial x^2} \approx \frac{u(x_{i+1},t) - 2u(x_i,t) + u(x_{i-1},t)}{\varDelta x^2} . \qquad (9.6)$$

It is common to introduce a short notation *u*_{i}(*t*) for *u*(*x*_{i}, *t*), i.e., *u* approximated at some mesh point *x*_{i} in space. With this new notation we can, after inserting (9.6) in (9.5), write an approximation to the PDE at mesh point (*x*_{i}, *t*) as

$$\displaystyle u_i^{\prime}(t) = \beta\frac{u_{i+1}(t) - 2u_i(t) + u_{i-1}(t)}{\varDelta x^2} + g_i(t), \quad i=1,\ldots,N-1 , \qquad (9.7)$$

where we have introduced the short notation *g*_{i}(*t*) for *g*(*x*_{i}, *t*) too.

What is (9.7)? This is nothing but a *system of ordinary differential equations* in *N* − 1 unknowns *u*_{1}(*t*), …, *u*_{N−1}(*t*)! In other words, with aid of the finite difference approximation (9.6), we have reduced the single PDE to a system of ODEs, which we know how to solve. In the literature, this strategy is called the *method of lines*.

We need to look into the initial and boundary conditions as well. The initial condition *u*(*x*, 0) = *I*(*x*) translates to an initial condition for every unknown function *u*_{i}(*t*): *u*_{i}(0) = *I*(*x*_{i}), *i* = 0, …, *N*. At the boundary *x* = 0 we need an ODE in our ODE system, which must come from the boundary condition at this point. The boundary condition reads *u*(0, *t*) = *s*(*t*). We can derive an ODE from this equation by differentiating both sides: \(u_0^{\prime }(t)=s^{\prime }(t)\). The ODE system above cannot be used for \(u_0^{\prime }\) since that equation involves some quantity \(u_{-1}^{\prime }\) outside the domain. Instead, we use the equation \(u_0^{\prime }(t)=s^{\prime }(t)\) derived from the boundary condition. For this particular equation we also need to make sure the initial condition is *u*_{0}(0) = *s*(0) (otherwise nothing will happen: we get *u* = 283 K forever).

We remark that a separate ODE for the (known) boundary condition *u*_{0} = *s*(*t*) is not strictly needed. We can just work with the ODE system for *u*_{1}, …, *u*_{N}, and in the ODE for *u*_{0}, replace *u*_{0}(*t*) by *s*(*t*). However, these authors prefer to have an ODE for every point value *u*_{i}, *i* = 0, …, *N*, which requires formulating the known boundary at *x* = 0 as an ODE. The reason for including the boundary values in the ODE system is that the solution of the system is then the complete solution at *all* mesh points, which is convenient, since special treatment of the boundary values is then avoided.

The boundary condition *∂u*∕*∂x* = 0 at *x* = *L* is a bit more complicated to handle, but we can approximate the spatial derivative by a centered finite difference:

$$\displaystyle \left.\frac{\partial u}{\partial x}\right|{}_{i=N} \approx \frac{u_{N+1}-u_{N-1}}{2\varDelta x} = 0 .$$

This approximation involves a fictitious point *x*_{N+1} outside the domain. A common trick is to use (9.7) for *i* = *N* and eliminate *u*_{N+1} by use of the discrete boundary condition (*u*_{N+1} = *u*_{N−1}):

$$\displaystyle u_N^{\prime}(t) = \beta\frac{2u_{N-1}(t) - 2u_N(t)}{\varDelta x^2} + g_N(t) .$$

That is, we have a special version of (9.7) at the boundary *i* = *N*.

### **What about simpler finite differences at the boundary?**

One may think that a smarter trick is to approximate the boundary condition *∂u*∕*∂x* at *x* = *L* by a one-sided difference:

$$\displaystyle \left.\frac{\partial u}{\partial x}\right|{}_{x=x_N} \approx \frac{u_{N}-u_{N-1}}{\varDelta x} = 0 .$$

This gives a simple equation *u*_{N} = *u*_{N−1} for the boundary value, and a corresponding ODE \(u_N^{\prime }=u_{N-1}^{\prime }\). However, this approximation has an error of order *Δx*, while the centered approximation we used above has an error of order *Δx*^{2}. The finite difference approximation we used for the second-order derivative in the diffusion equation also has an error of order *Δx*^{2}. Thus, if we use the simpler one-sided difference above, it turns out that we reduce the overall accuracy of the method.

### 9.2.2 Construction of a Test Problem with Known Discrete Solution

Solutions *u*(*x*, *t*) that are linear in time and in space can be exactly reproduced by most numerical methods for PDEs. A candidate solution might be

$$\displaystyle u(x,t) = (3t+2)(x-L) .$$

Inserting this *u* in the governing equation gives

$$\displaystyle 3(x-L) = 0 + g(x,t)\quad\Rightarrow\quad g(x,t) = 3(x-L) .$$

This particular solution fulfills *∂u*∕*∂x* = 3*t* + 2 for *x* = *L*, which breaks the assumption of *∂u*∕*∂x* = 0 at *x* = *L* in the formulation of the numerical method above. Moreover, *u*(0, *t*) = −*L*(3*t* + 2), so we must set *s*(*t*) = −*L*(3*t* + 2) and *s*^{′}(*t*) = −3*L*. Finally, the initial condition dictates *I*(*x*) = 2(*x* − *L*), but recall that we must have *u*_{0} = *s*(0), and *u*_{i} = *I*(*x*_{i}), *i* = 1, …, *N*: it is important that *u*_{0} starts out at the right value dictated by *s*(*t*) in case *I*(0) is not equal to this value.

We must therefore extend the numerical method so that it can handle a prescribed derivative *∂u*∕*∂x* = *γ* ≠ 0 at *x* = *L*. The centered difference then gives *u*_{N+1} = *u*_{N−1} + 2*γΔx*, and inserting this in (9.7) for *i* = *N*, we have

$$\displaystyle u_N^{\prime}(t) = \beta\frac{2u_{N-1}(t) + 2\gamma\varDelta x - 2u_N(t)}{\varDelta x^2} + g_N(t) . \qquad (9.14)$$

### 9.2.3 Implementation: Forward Euler Method

In particular, we may use the Forward Euler method as implemented in the general function ode_FE in the module ode_system_FE from Sect. 8.3.6. The ode_FE function needs a specification of the right-hand side of the ODE system. This is a matter of translating (9.9), (9.10), and (9.14) to Python code (in file test_diffusion_pde_exact_linear.py):
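The file itself is not listed here, but a sketch of what the translation can look like is given below. The function names (rhs, s, dsdt, g, dudx) and the globals beta, dx, L, and x follow the description in the text, while the concrete values of beta and L are illustrative choices, not values from the book's file:

```python
import numpy as np

beta = 1.0                    # diffusion coefficient (illustrative value)
L = 1.5                       # rod length (illustrative value)
N = 4                         # number of cells
x = np.linspace(0, L, N + 1)  # mesh points
dx = x[1] - x[0]

def s(t):     return -L*(3*t + 2)   # boundary value u(0, t)
def dsdt(t):  return -3*L           # s'(t)
def g(x, t):  return 3*(x - L)      # source term
def dudx(t):  return 3*t + 2        # gamma: du/dx at x = L

def rhs(u, t):
    N = len(u) - 1
    r = np.zeros(N + 1)
    r[0] = dsdt(t)                  # ODE for the boundary value at x = 0
    for i in range(1, N):           # interior mesh points, cf. (9.7)
        r[i] = beta/dx**2*(u[i+1] - 2*u[i] + u[i-1]) + g(x[i], t)
    # boundary i = N, with du/dx = gamma(t) eliminated by a centered
    # difference, cf. (9.14)
    r[N] = 2*beta/dx**2*(u[N-1] - u[N]) + 2*beta*dudx(t)/dx + g(x[N], t)
    return r
```

With the linear test solution described in this section, every second-difference term vanishes, and rhs returns exactly *∂u*∕*∂t* = 3(*x* − *L*) at every mesh point.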

Note that dudx(t) is the function representing the *γ* parameter in (9.14). Also note that the rhs function relies on access to global variables beta, dx, L, and x, and global functions dsdt, g, and dudx.

We expect the solution to be correct regardless of *N* and *Δt*, so we can choose a small *N*, *N* = 4, and *Δt* = 0.1. A test function with *N* = 4 goes like
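Since the ode_FE function is not listed here, the sketch below uses a minimal Forward Euler loop with the same call signature as a stand-in; apart from that, it follows the description in the text (N = 4, Δt = 0.1) and checks that the linear solution is reproduced to machine precision:

```python
import numpy as np

beta, L, N = 1.0, 1.5, 4          # illustrative parameter values
x = np.linspace(0, L, N + 1)
dx = x[1] - x[0]

def rhs(u, t):
    # ODE system for the test problem with exact solution u = (3t+2)(x-L)
    r = np.zeros(N + 1)
    r[0] = -3*L                                        # s'(t)
    r[1:N] = beta/dx**2*(u[2:N+1] - 2*u[1:N] + u[0:N-1]) + 3*(x[1:N] - L)
    r[N] = 2*beta/dx**2*(u[N-1] - u[N]) + 2*beta*(3*t + 2)/dx + 3*(x[N] - L)
    return r

def ode_FE(f, U_0, dt, T):
    # minimal Forward Euler stand-in for ode_FE from Chap. 8
    Nt = int(round(T/dt))
    t = np.linspace(0, Nt*dt, Nt + 1)
    u = np.zeros((Nt + 1, len(U_0)))
    u[0] = U_0
    for n in range(Nt):
        u[n+1] = u[n] + dt*f(u[n], t[n])
    return u, t

def test_diffusion_exact_linear():
    U_0 = 2*(x - L)                    # exact solution at t = 0
    u, t = ode_FE(rhs, U_0, dt=0.1, T=1.2)
    u_e = (3*t[-1] + 2)*(x - L)        # exact solution at the final time
    diff = abs(u[-1] - u_e).max()
    assert diff < 1E-12

test_diffusion_exact_linear()
```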

With *N* = 4 we reproduce the linear solution exactly. This brings confidence to the implementation, which is just what we need for attacking a real physical problem next.

### **Problems with reusing the** **rhs** **function**

The rhs function *must* take u and t as arguments, because that is required by the ode_FE function. What about the variables beta, dx, L, x, dsdt, g, and dudx that the rhs function needs? These are global in the solution we have presented so far. Unfortunately, this has an undesired side effect: we cannot import the rhs function in a new file, define dudx and dsdt in this new file and get the imported rhs to use these functions. The imported rhs will use the global variables, including functions, in its own module.

How can we find solutions to this problem? Technically, we must pack the extra data beta, dx, L, x, dsdt, g, and dudx with the rhs function, which requires more advanced programming considered beyond the scope of this text.

A class is the simplest construction for packing a function together with data, see the beginning of Chapter 7 in [11] for a detailed example on how classes can be used in such a context. Another solution in Python, and especially in computer languages supporting *functional programming*, is so called *closures*. They are also covered in Chapter 7 in the mentioned reference and behave in a magic way. The third solution is to allow an arbitrary set of arguments for rhs in a list to be transferred to ode_FE and then back to rhs. Appendix H.4 in [11] explains the technical details.
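To illustrate the class-based solution, here is a minimal sketch; the class name Diffusion1D and its attribute layout are our own choices for illustration, not code from [11]:

```python
import numpy as np

class Diffusion1D:
    """Pack the rhs function together with the data it needs."""
    def __init__(self, beta, x, dsdt, g, dudx):
        self.beta, self.x, self.dx = beta, x, x[1] - x[0]
        self.dsdt, self.g, self.dudx = dsdt, g, dudx

    def __call__(self, u, t):          # same (u, t) signature as rhs
        beta, dx, x = self.beta, self.dx, self.x
        N = len(u) - 1
        r = np.zeros(N + 1)
        r[0] = self.dsdt(t)
        r[1:N] = beta/dx**2*(u[2:N+1] - 2*u[1:N] + u[0:N-1]) \
                 + self.g(x[1:N], t)
        r[N] = 2*beta/dx**2*(u[N-1] - u[N]) + 2*beta*self.dudx(t)/dx \
               + self.g(x[N], t)
        return r
```

An instance rhs = Diffusion1D(beta, x, dsdt, g, dudx) is callable as rhs(u, t) and can therefore be passed to ode_FE, while each importing file creates its own instance with its own dsdt, g, and dudx functions.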

### 9.2.4 Animation: Heat Conduction in a Rod

Let us return to the case with heat conduction in a rod (9.1)–(9.4). Assume that the rod is 50 cm long and made of aluminum alloy 6082. The *β* parameter equals *κ*∕(*ϱc*), where *κ* is the heat conduction coefficient, *ϱ* is the density, and *c* is the heat capacity. We can find proper values for these physical quantities in the case of aluminum alloy 6082: *ϱ* = 2.7 ⋅ 10^{3} kg/m^{3}, \(\kappa = 200\,\,\frac {\mbox{W}}{\mbox{mK}}\), \(c=900\,\,\frac {\mbox{J}}{\mbox{Kkg}}\). This results in *β* = *κ*∕(*ϱc*) = 8.2 ⋅ 10^{−5} m^{2}∕s. Preliminary simulations show that we are close to a constant steady state temperature after 1 h, i.e., *T* = 3600 s.

The rhs function from the previous section can be reused, only the functions s, dsdt, g, and dudx must be changed (see file rod_FE.py):
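A sketch of the changed functions, using the values described earlier (a controlled temperature of 323 K at *x* = 0, no source term, and an insulated end at *x* = *L*):

```python
def s(t):     # controlled temperature at x = 0 [K]
    return 323.0

def dsdt(t):  # s'(t): the boundary temperature is constant
    return 0.0

def g(x, t):  # no heat generation inside the rod
    return 0.0

def dudx(t):  # du/dx = 0 at the insulated end x = L
    return 0.0
```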

Parameters can be set as
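For example as follows (the resolution N = 40 is an assumed choice; the physical values are those given above):

```python
import numpy as np

L = 0.5                       # length of the rod [m]
beta = 8.2E-5                 # kappa/(rho*c) for aluminium alloy 6082 [m^2/s]
N = 40                        # number of cells (assumed resolution)
x = np.linspace(0, L, N + 1)  # mesh points
dx = x[1] - x[0]
T = 3600                      # simulate for one hour [s]

u = np.zeros(N + 1) + 283     # initial temperature: 283 K along the rod...
u[0] = 323.0                  # ...except at the controlled end, where u_0 = s(0)
```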

Let us use *Δt* = 1.0. We can now call ode_FE and then make an animation on the screen to see how *u*(*x*, *t*) develops in time:
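A sketch of this step is shown below, with a plain Forward Euler loop standing in for ode_FE and matplotlib's file-based Agg backend standing in for on-screen animation; the frame interval is an arbitrary choice, and Δt = 1.0 is the value chosen above:

```python
import numpy as np
import matplotlib
matplotlib.use('Agg')                 # render plots to files, not to a screen
import matplotlib.pyplot as plt

L, beta, N = 0.5, 8.2E-5, 40
x = np.linspace(0, L, N + 1)
dx = x[1] - x[0]
dt = 1.0
Nt = int(round(3600/dt))              # simulate for one hour

def rhs(u, t):
    r = np.zeros(N + 1)
    r[0] = 0.0                        # dsdt(t) = 0: u(0,t) is held at 323 K
    r[1:N] = beta/dx**2*(u[2:N+1] - 2*u[1:N] + u[0:N-1])
    r[N] = 2*beta/dx**2*(u[N-1] - u[N])
    return r

u = np.zeros(N + 1) + 283
u[0] = 323.0

frame = 0
for n in range(Nt):                   # Forward Euler stand-in for ode_FE
    u = u + dt*rhs(u, n*dt)
    if n % 300 == 0:                  # save a fraction of the plots to files
        plt.figure()
        plt.plot(x, u)
        plt.xlabel('x [m]'); plt.ylabel('u(x,t) [K]')
        plt.savefig('tmp_%04d.png' % frame)
        plt.close()
        frame += 1
```

Running this sketch with Δt = 1.0 produces the unstable evolution discussed below; reducing Δt below the stability limit gives a smooth solution.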

The plotting statements update the *u*(*x*, *t*) curve on the screen. In addition, we save a fraction of the plots to files tmp_0000.png, tmp_0001.png, tmp_0002.png, and so on. These plots can be combined to ordinary video files. A common tool is ffmpeg or its sister avconv.

### An Unstable Solution

When we run the program with *Δt* = 1.0, the solution turns out to be *unstable*, meaning that it does not display the same characteristics as the true, physical solution. Even though we tested the code carefully in the previous section, it does not seem to work for a physical application!

### Why Is the Solution Unstable?

The problem is that *Δt* is too large, making the solution *unstable*. It turns out that the Forward Euler time integration method puts a restriction on the size of *Δt*. For the heat equation and the way we have discretized it, this restriction can be shown to be [14]

$$\displaystyle \varDelta t \leq \frac{\varDelta x^2}{2\beta} . \qquad (9.15)$$

This is called a *stability criterion*. With the chosen parameters, (9.15) tells us that the upper limit is *Δt* = 0.9527439, which is smaller than our choice above. Rerunning the case with a *Δt* equal to *Δx*^{2}∕(2*β*) indeed shows a smooth evolution of *u*(*x*, *t*). Find the program rod_FE.py and run it to see an animation of the *u*(*x*, *t*) function on the screen.

### **Scaling and dimensionless quantities**

Our setting of parameters required finding three physical properties of a certain material. The time interval for simulation and the time step depend crucially on the values for *β* and *L*, which can vary significantly from case to case. Often, we are more interested in how the shape of *u*(*x*, *t*) develops, than in the actual *u*, *x*, and *t* values for a specific material. We can then simplify the setting of physical parameters by *scaling* the problem.

In the present case, we introduce the dimensionless variables

$$\displaystyle \bar u = \frac{u - u^*}{u_c - u^*}, \quad \bar x = \frac{x}{x_c}, \quad \bar t = \frac{t}{t_c},$$

where *u*_{c} is a characteristic size of the temperature, *u*^{∗} is some reference temperature, while *x*_{c} and *t*_{c} are characteristic space and time scales. Here, it is natural to choose *u*^{∗} as the initial condition, and set *u*_{c} to the stationary (end) temperature. Then \(\bar u\in [0,1]\), starting at 0 and ending at 1 as *t* →*∞*. The length *L* is *x*_{c}, while choosing *t*_{c} is more challenging, but one can argue for *t*_{c} = *L*^{2}∕*β*. The resulting equation for \(\bar u\) reads

$$\displaystyle \frac{\partial \bar u}{\partial \bar t} = \frac{\partial^2 \bar u}{\partial \bar x^2}, \quad \bar x\in(0,1) .$$

Notice that this equation contains *no physical parameters*! In other words, we have found a model that is independent of the length of the rod and the material it is made of (!).

We can solve this dimensionless equation with our program by setting *β* = 1, *L* = 1, *I*(*x*) = 0, and *s*(*t*) = 1. It turns out that the total simulation time (to “infinity”) can be taken as 1.2. When we have the solution \(\bar u(\bar x,\bar t)\), the solution with dimension Kelvin, reflecting the true temperature in our medium, is given by

$$\displaystyle u(x,t) = u^* + (u_c - u^*)\,\bar u(x/L,\ t\beta/L^2) .$$

Through this formula we can quickly generate the solution for a rod made of any material, simply by plugging in the relevant *β* value.

The power of scaling is to reduce the number of physical parameters in a problem, and in the present case, we found one single problem that is independent of the material (*β*) and the geometry (*L*).

### 9.2.5 Vectorization

Occasionally in this book, we show how to speed up code by replacing loops over arrays by vectorized expressions. The present problem involves a loop for computing the right-hand side:
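Written out, the loop has this form (a sketch with illustrative parameters; in the real rhs function, beta, dx, x, and g come from the surrounding module, and the boundary entries rhs[0] and rhs[N] are set separately):

```python
import numpy as np

beta, N = 1.0, 8
x = np.linspace(0, 1, N + 1)
dx = x[1] - x[0]
g = lambda x, t: 0.0          # source term (zero here, for illustration)

def rhs_scalar(u, t):
    rhs = np.zeros(N + 1)
    # (the boundary entries rhs[0] and rhs[N] are set by separate formulas)
    for i in range(1, N):     # loop over all interior mesh points
        rhs[i] = beta/dx**2*(u[i+1] - 2*u[i] + u[i-1]) + g(x[i], t)
    return rhs
```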

This loop can be replaced by a vectorized expression with the following reasoning. We want to set all the inner points at once: rhs[1:N] (this slice goes from index 1 up to, but not including, N). As the loop index i runs from 1 to N-1, the u[i+1] term will cover all the inner u values displaced one index to the right (compared to 1:N), i.e., u[2:N+1]. Similarly, u[i-1] corresponds to all inner u values displaced one index to the left: u[0:N-1]. Finally, u[i] has the same indices as rhs: u[1:N]. The vectorized loop can therefore be written in terms of slices:
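The corresponding vectorized version, in the same sketch setting as the scalar loop:

```python
import numpy as np

beta, N = 1.0, 8
x = np.linspace(0, 1, N + 1)
dx = x[1] - x[0]
g = lambda x, t: 0.0          # source term (zero here, for illustration)

def rhs_vec(u, t):
    rhs = np.zeros(N + 1)
    # all interior mesh points updated in one slice expression
    rhs[1:N] = beta/dx**2*(u[2:N+1] - 2*u[1:N] + u[0:N-1]) + g(x[1:N], t)
    return rhs
```

Apart from the slicing, nothing changes; the loop and the slice expression produce identical arrays.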

This rewrite speeds up the code by about a factor of 10. A complete code is found in the file rod_FE_vec.py.

### 9.2.6 Using Odespy to Solve the System of ODEs

Let us now show how to apply a general ODE package like Odespy (see Sect. 8.4.6) to solve our diffusion problem. As long as we have defined a right-hand side function rhs this is very straightforward:
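Odespy targets Python 2 and may be hard to install today. As an illustration of the same pattern with a maintained package, here is the scaled rod problem (*β* = *L* = 1, *s* = 1, *I* = 0) solved with SciPy's solve_ivp; note that SciPy expects the right-hand side with argument order (t, u), and the choice N = 20 is ours:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 20
x = np.linspace(0, 1, N + 1)       # scaled rod: beta = 1, L = 1
dx = x[1] - x[0]

def rhs(t, u):                     # note the (t, u) argument order for SciPy
    r = np.zeros(N + 1)
    r[0] = 0.0                     # u(0,t) = s(t) = 1 is constant
    r[1:N] = (u[2:N+1] - 2*u[1:N] + u[0:N-1])/dx**2
    r[N] = 2*(u[N-1] - u[N])/dx**2
    return r

U0 = np.zeros(N + 1)               # scaled initial condition I(x) = 0 ...
U0[0] = 1.0                        # ... with u_0 = s(0) = 1
sol = solve_ivp(rhs, (0, 1.2), U0, method='RK45')
u_end = sol.y[:, -1]               # close to the stationary solution u = 1
```

The adaptive RK45 solver chooses its own time steps, just as RKFehlberg does in Odespy.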

While the *Δt* corresponding to the Forward Euler method requires over 8000 steps for this simulation, we started the RKFehlberg method with a 100 times larger time step, and in the end it required just slightly more than 2500 steps, using the default tolerance parameters. Lowering the tolerance did not save any significant amount of computational work. Figure 9.2 shows a comparison of the length of all the time steps for two values of the tolerance. We see that the influence of the tolerance is minor in this computational example, so it seems that the blow-up due to instability is what governs the time step size. The nice feature of this adaptive method is that we can just specify when we want the solution to be computed, and the method figures out on its own what time step has to be used because of stability restrictions.

We have seen how easy it is to apply sophisticated methods for ODEs to this PDE example. We shall take the use of Odespy one step further in the next section.

### 9.2.7 Implicit Methods

A major problem with the stability criterion (9.15) is that the time step becomes very small if *Δx* is small. For example, halving *Δx* requires four times as many time steps and eight times the work. Now, with *N* = 40, which is a reasonable resolution for the test problem above, the computations are very fast. What takes time is the visualization on the screen, but for that purpose one can visualize only a subset of the time steps. However, there are occasions when you need to take larger time steps with the diffusion equation, especially if the interest is in the long-term behavior as *t* →*∞*. You must then turn to *implicit methods* for ODEs. These methods require the solution of *linear systems* if the underlying PDE is linear, and systems of *nonlinear algebraic equations* if the underlying PDE is nonlinear.

The simplest implicit method is the *Backward Euler* scheme, which puts no restrictions on *Δt* for stability, but obviously, a large *Δt* leads to inaccurate results. The Backward Euler scheme for a scalar ODE *u*^{′} = *f*(*u*, *t*) reads

$$\displaystyle \frac{u^{n+1}-u^n}{\varDelta t} = f(u^{n+1}, t_{n+1}) .$$

This equation is to be solved for *u*^{n+1}. If *f* is linear in *u*, it is a linear equation, but if *f* is nonlinear in *u*, one needs approximate methods for nonlinear equations (Chap. 7).

In our case, *f* is linear in *u*, so applying the Backward Euler scheme to the ODE system gives a system of *linear* algebraic equations for the unknowns \(u_i^{n+1}\), *i* = 0, …, *N*, which is easy to realize by writing out the equations for the case *N* = 3, collecting all the unknown terms on the left-hand side and all the known terms on the right-hand side; this results in the equations referred to as (9.19)–(9.21).

A system of linear equations like this is usually written on matrix form *Au* = *b*, where *A* is a coefficient matrix, \(u=(u_0^{n+1},\ldots ,u_N^{n+1})\) is the vector of unknowns, and *b* is a vector of known values. The coefficient matrix for the case (9.19)–(9.21) is an (*N* + 1) × (*N* + 1) matrix with zero entries, except for

$$\displaystyle \begin{aligned} A_{0,0} &= 1, \\ A_{i,i-1} = A_{i,i+1} &= -\frac{\varDelta t\,\beta}{\varDelta x^2}, \quad i=1,\ldots,N-1, \\ A_{i,i} &= 1 + \frac{2\varDelta t\,\beta}{\varDelta x^2}, \quad i=1,\ldots,N-1, \\ A_{N,N-1} &= -\frac{2\varDelta t\,\beta}{\varDelta x^2}, \qquad A_{N,N} = 1 + \frac{2\varDelta t\,\beta}{\varDelta x^2} . \end{aligned}$$

If we want to apply a package like Odespy to solve such a linear ODE system *u*^{′} = *f*(*u*, *t*), we can assume a linear *f*(*u*, *t*) = *Ku*. The coefficient matrix *K* is found from the right-hand side of (9.16)–(9.18) to be the (*N* + 1) × (*N* + 1) matrix with zero entries, except for

$$\displaystyle \begin{aligned} K_{0,0} &= 0, \\ K_{i,i-1} = K_{i,i+1} &= \frac{\beta}{\varDelta x^2}, \quad i=1,\ldots,N-1, \\ K_{i,i} &= -\frac{2\beta}{\varDelta x^2}, \quad i=1,\ldots,N-1, \\ K_{N,N-1} &= \frac{2\beta}{\varDelta x^2}, \qquad K_{N,N} = -\frac{2\beta}{\varDelta x^2} \end{aligned}$$

(the first row is zero because *s*(*t*) is constant in the rod problem, so \(u_0^{\prime} = s^{\prime}(t) = 0\)). The two matrices are related by *A* = *I* − *Δt* *K*, where *I* is the identity matrix.

To implement the Backward Euler scheme, we can either fill a matrix and call a linear solver, or we can apply Odespy. We follow the latter strategy. Implicit methods in Odespy need the *K* matrix above, given as an argument jac (Jacobian of *f*) in the call to odespy.BackwardEuler. Here is the Python code for the right-hand side of the ODE system (rhs) and the *K* matrix (K) as well as statements for initializing and running the Odespy solver BackwardEuler (in the file rod_BE.py):

The file rod_BE.py has all the details and shows a movie of the solution. We can run it with any *Δt* we want, its size just impacts the accuracy of the first steps.

### **Odespy solvers apply dense matrices!**

Looking at the entries of the *K* matrix, we realize that there are at most three entries different from zero in each row. Therefore, most of the entries are zeroes. The Odespy solvers expect dense square matrices as input, here with (*N* + 1) × (*N* + 1) elements. When solving the linear systems, a lot of storage and work are spent on the zero entries in the matrix. It would be much more efficient to store the matrix as a *tridiagonal* matrix and apply a specialized Gaussian elimination solver for tridiagonal systems. Actually, this reduces the work from the order *N*^{3} to the order *N*.

In one-dimensional diffusion problems, the savings of using a tridiagonal matrix are modest in practice, since the matrices are very small anyway. In two- and three-dimensional PDE problems, however, one cannot afford dense square matrices. Rather, one *must* resort to more efficient storage formats and algorithms tailored to such formats, but this is beyond the scope of the present text.
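For reference, solving a tridiagonal system with a banded solver is a one-liner with scipy.linalg.solve_banded, which stores only the three diagonals (the matrix here is random sample data, not the *A* matrix above):

```python
import numpy as np
from scipy.linalg import solve_banded

N = 5
rng = np.random.default_rng(1)

# A dense tridiagonal matrix, built only for comparison
main = 2.0 + rng.random(N)              # diagonally dominant main diagonal
lower = -np.ones(N - 1)
upper = -np.ones(N - 1)
A = np.diag(main) + np.diag(lower, -1) + np.diag(upper, 1)
b = rng.random(N)

# Banded storage: row 0 = superdiagonal, row 1 = main, row 2 = subdiagonal
ab = np.zeros((3, N))
ab[0, 1:] = upper
ab[1, :] = main
ab[2, :-1] = lower
u = solve_banded((1, 1), ab, b)         # O(N) work instead of O(N^3)
```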

## 9.3 Exercises

### Exercise 9.1: Simulate a Diffusion Equation by Hand

Consider the problem given by (9.9), (9.10) and (9.14). Set *N* = 2 and compute \(u_i^0\), \(u_i^1\) and \(u_i^2\) by hand for *i* = 0, 1, 2. Use these values to construct a test function for checking that the implementation is correct. Copy useful functions from test_diffusion_pde_exact_linear.py and make a new test function test_diffusion_hand_calculation.

Filename: test_rod_hand_calculations.py.

### Exercise 9.2: Compute Temperature Variations in the Ground

The surface temperature at the ground shows daily and seasonal oscillations. When the temperature rises at the surface, heat is propagated into the ground, and the coefficient *β* in the diffusion equation determines how fast this propagation is. It takes some time before a temperature rise at the surface is felt at a given depth, and by then the surface temperature may already have fallen again. We are interested in how the temperature varies down in the ground because of temperature oscillations on the surface.

Assuming homogeneous horizontal properties of the ground, at least locally, and no variations of the temperature at the surface at a fixed point of time, we can neglect the horizontal variations of the temperature. Then a one-dimensional diffusion equation governs the heat propagation along a vertical axis called *x*. The surface corresponds to *x* = 0 and the *x* axis points downwards into the ground. There is no source term in the equation (actually, if rocks in the ground are radioactive, they emit heat and that can be modeled by a source term, but this effect is neglected here).

At some depth *x* = *L* we assume that the heat changes in *x* vanish, so *∂u*∕*∂x* = 0 is an appropriate boundary condition at *x* = *L*. We assume a simple sinusoidal temperature variation at the surface:

$$\displaystyle u(0,t) = T_0 + T_a\sin\left(\frac{2\pi}{P}t\right),$$

where *P* is the period, taken here as 24 h (24 ⋅ 60 ⋅ 60 s). The *β* coefficient may be set to 10^{−6} m^{2}∕s. Time is then measured in seconds. Set appropriate values for *T*_{0} and *T*_{a}.

- a)
Show that the present problem has an analytical solution of the form

$$\displaystyle \begin{aligned} u(x,t) = A + Be^{-rx}\sin{}(\omega t - rx),\end{aligned}$$

for appropriate values of *A*, *B*, *r*, and *ω*.
- b)
Solve this heat propagation problem numerically for some days and animate the temperature. You may use the Forward Euler method in time. Plot both the numerical and analytical solution. As initial condition for the numerical solution, use the exact solution during program development, and when the curves coincide in the animation for all times, your implementation works, and you can then switch to a constant initial condition: *u*(*x*, 0) = *T*_{0}. For this latter initial condition, how many periods of oscillations are necessary before there is a good (visual) match between the numerical and exact solution (despite differences at *t* = 0)?

### Exercise 9.3: Compare Implicit Methods

An alternative to the Backward Euler scheme is the implicit *backward 2-step* method, which for an ODE *u*^{′} = *f*(*u*, *t*) can be expressed by

$$\displaystyle \frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\varDelta t} = f(u^{n+1}, t_{n+1}) .$$

The Odespy package offers this method as Backward2Step. Solve the heat conduction problem in the rod from Sect. 9.2.4 with

- 1.
The Backward Euler method with *Δt* = 0.001
- 2.
The backward 2-step method with *Δt* = 0.001
- 3.
The backward 2-step method with *Δt* = 0.01

Filename: rod_BE_vs_B2Step.py.

### Exercise 9.4: Explore Adaptive and Implicit Methods

Solve the problem in Exercise 9.2, but experiment with (much) larger *Δt* values than the Forward Euler stability criterion allows. Since there exists an analytical solution, you can compute an error measure that summarizes the error in space and time over the whole simulation:

$$\displaystyle E = \sqrt{\varDelta x\,\varDelta t\sum_{i}\sum_{n}\left(u_i^n - u_{\mathrm{e}}(x_i,t_n)\right)^2},$$

where \(u_{\mathrm{e}}\) denotes the exact (analytical) solution. Compute *E* for the following Odespy solvers:

- 1.
BackwardEuler

- 2.
Backward2Step

- 3.
RKFehlberg

**Hint**

To avoid oscillations in the solutions when using the RKFehlberg method, the rtol and atol parameters to RKFehlberg must be set no larger than 0.001 and 0.0001, respectively. You can print out solver_RKF.t_all to see all the time steps used by the RKFehlberg solver (if solver_RKF is the RKFehlberg object). You can then compare the number of time steps with what is required by the other methods.

Filename: ground_temp_adaptive.py.

### Exercise 9.5: Investigate the *θ* Rule

- a)
The Crank-Nicolson method for ODEs is very popular when combined with diffusion equations. For a linear ODE *u*^{′} = *au* it reads

$$\displaystyle \begin{aligned} \frac{u^{n+1}-u^n}{\varDelta t} = \frac{1}{2}(au^{n} + au^{n+1}) \, . \end{aligned}$$

Apply the Crank-Nicolson method in time to the ODE system for a one-dimensional diffusion equation. Identify the linear system to be solved.
- b)
The Backward Euler, Forward Euler, and Crank-Nicolson methods can be given a unified implementation. For a linear ODE *u*^{′} = *au* this formulation is known as the *θ* rule:

$$\displaystyle \begin{aligned} \frac{u^{n+1}-u^n}{\varDelta t} = (1-\theta)au^{n} + \theta au^{n+1} \, . \end{aligned}$$

For *θ* = 0 we recover the Forward Euler method, *θ* = 1 gives the Backward Euler scheme, and *θ* = 1∕2 corresponds to the Crank-Nicolson method. The approximation error in the *θ* rule is proportional to *Δt*, except for *θ* = 1∕2, where it is proportional to *Δt*^{2}. For *θ* ≥ 1∕2 the method is stable for all *Δt*. Apply the *θ* rule to the ODE system for a one-dimensional diffusion equation. Identify the linear system to be solved.
- c)
Implement the *θ* rule with aid of the Odespy package. The relevant object name is ThetaRule.
- d)
Consider the physical application from Sect. 9.2.4. Run this case with the *θ* rule and *θ* = 1∕2 for the following values of *Δt*: 0.001, 0.01, 0.05. Report what you see.

*Remarks*

Despite the fact that the Crank-Nicolson method, or the *θ* rule with *θ* = 1∕2, is theoretically more accurate than the Backward Euler and Forward Euler schemes, it may exhibit non-physical oscillations, as in the present example, if the solution is very steep. The oscillations are damped in time and decrease with decreasing *Δt*. To avoid oscillations, *Δt* must be at most twice the stability limit of the Forward Euler method. This is one reason why the Backward Euler method (or the 2-step backward scheme of Exercise 9.3) is popular for diffusion equations with abrupt initial conditions.

### Exercise 9.6: Compute the Diffusion of a Gaussian Peak

Solve the scaled diffusion equation *∂u*∕*∂t* = *∂*^{2}*u*∕*∂x*^{2} on the domain [−1, 1] with *∂u*∕*∂x* = 0 at both end points. The initial condition is the famous and widely used *Gaussian function*,

$$\displaystyle u(x,0) = \frac{1}{\sqrt{2\pi}\,\sigma}\exp\left(-\frac{x^2}{2\sigma^2}\right),$$

with standard deviation (or “width”) *σ*, which is here taken to be small, *σ* = 0.01, such that the initial condition is a peak. This peak will then diffuse and become lower and wider. Compute *u*(*x*, *t*) until *u* becomes approximately constant over the domain.

Filename: gaussian_diffusion.py.

*Remarks*

A simulation with *σ* = 0.2 results in a constant solution *u* ≈ 1∕2 as *t* →*∞*, while one might expect from “physics of diffusion” that the solution should approach zero. The reason is that we apply Neumann conditions as boundary conditions. One can then easily show that the area under the *u* curve remains constant. Integrating the PDE over the domain gives

$$\displaystyle \frac{d}{dt}\int_{-1}^1 u\,dx = \beta\left[\frac{\partial u}{\partial x}\right]_{-1}^{1} = 0,$$

since *∂u*∕*∂x* = 0 at the end points. The result means that \(\int _{-1}^1 udx\) remains constant during the simulation. Giving the PDE an interpretation in terms of heat conduction can easily explain the result: with Neumann conditions no heat can escape from the domain, so the initial heat will just be evenly distributed, but not leak out, and the temperature cannot go to zero (or the scaled and translated temperature *u*, to be precise). The area under the initial condition is 1 and the length of the domain is 2, so with a sufficiently fine mesh, *u* → 1∕2, regardless of *σ*.

### Exercise 9.7: Vectorize a Function for Computing the Area of a Polygon

Vectorize the implementation of the function for computing the area of a polygon in Exercise 5.6. Make a test function that compares the scalar implementation in Exercise 5.6 and the new vectorized implementation for the test cases used in Exercise 5.6.

**Hint**

Notice that the formula \(x_1y_2+x_2y_3 + \cdots + x_{n-1}y_n \,{=}\, \sum _{i=1}^{n-1}x_iy_{i+1}\) is the dot product of two vectors, x[:-1] and y[1:], which can be computed as numpy.dot(x[:-1], y[1:]), or more explicitly as numpy.sum(x[:-1]*y[1:]).
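The hint is easy to check directly: the term-by-term sum and the dot product give the same number (the coordinate arrays are just sample data):

```python
import numpy as np

x = np.array([0.0, 1.0, 1.0, 0.0, 0.0])  # sample vertex coordinates
y = np.array([0.0, 0.0, 1.0, 1.0, 0.0])  # (closed polygon: last = first)

s_loop = sum(x[i]*y[i+1] for i in range(len(x) - 1))  # x_1*y_2 + x_2*y_3 + ...
s_dot = np.dot(x[:-1], y[1:])                          # the same sum, vectorized
```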

Filename: polyarea_vec.py.

### Exercise 9.8: Explore Symmetry

One can observe (and also mathematically prove) that the solution *u*(*x*, *t*) of the problem in Exercise 9.6 is symmetric around *x* = 0: *u*(−*x*, *t*) = *u*(*x*, *t*). In such a case, we can split the domain in two and compute *u* in only one half, [−1, 0] or [0, 1]. At the symmetry line *x* = 0 we have the symmetry boundary condition *∂u*∕*∂x* = 0. Reformulate the problem in Exercise 9.6 such that we compute only for *x* ∈ [0, 1]. Display the solution and observe that it equals the right part of the solution in Exercise 9.6.

Filename: symmetric_gaussian_diffusion.py.

*Remarks*

In 2D and 3D problems, where the CPU time to compute a solution of a PDE can be hours and days, it is very important to utilize symmetry as we do above to reduce the size of the problem.

Also note the remarks in Exercise 9.6 about the constant area under the *u*(*x*, *t*) curve: here, the area is 0.5 and *u* → 0.5 as *t* → *∞* (if the mesh is sufficiently fine—one will get convergence to smaller values for small *σ* if the mesh is not fine enough to properly resolve a thin-shaped initial condition).
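The equivalence can be illustrated with a small sketch (the `solve` helper, the Forward Euler scheme, and all parameter values are our own illustrative choices): solve on the full domain [−1, 1] and on the half domain [0, 1] with the symmetry condition *∂u*∕*∂x* = 0 at *x* = 0, and compare the right halves:

```python
import numpy as np

def solve(x, u, beta, dt, steps):
    # Forward Euler for u_t = beta*u_xx with du/dx = 0 at both ends,
    # implemented via mirrored ghost points.
    dx = x[1] - x[0]
    F = beta * dt / dx**2
    for n in range(steps):
        un = u.copy()
        u[1:-1] = un[1:-1] + F*(un[2:] - 2*un[1:-1] + un[:-2])
        u[0] = un[0] + 2*F*(un[1] - un[0])
        u[-1] = un[-1] + 2*F*(un[-2] - un[-1])
    return u

beta, sigma, N = 1.0, 0.2, 100
x_full = np.linspace(-1, 1, 2*N + 1)   # full domain, x_full[N] = 0
x_half = np.linspace(0, 1, N + 1)      # half domain, same mesh spacing
dx = x_half[1] - x_half[0]
dt = 0.4 * dx**2 / beta                # stable Forward Euler step
I = lambda x: np.exp(-x**2 / (2*sigma**2))   # symmetric Gaussian peak

u_full = solve(x_full, I(x_full), beta, dt, 200)
u_half = solve(x_half, I(x_half), beta, dt, 200)
print(np.allclose(u_full[N:], u_half))  # prints True
```

Because the symmetric Neumann update at *x* = 0 reproduces exactly what the full-domain scheme does at the midpoint, the half-domain solution matches the right half of the full solution to round-off, at half the cost.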

### Exercise 9.9: Compute Solutions as *t* → *∞*

Many diffusion problems reach a stationary, time-independent solution as *t* → *∞*. The model problem from Sect. 9.2.4 is one example, where *u*(*x*, *t*) = *s*(*t*) = const for *t* → *∞*. When *u* does not depend on time, the diffusion equation reduces to the *Poisson* equation, or if *f* = 0, it is known as the *Laplace* equation. In this limit *t* → *∞*, there is no need for an initial condition, but the boundary conditions are the same as for the diffusion equation.

The stationary problem (9.38) is known as a *two-point boundary value problem*. This is nothing but the stationary limit of the diffusion problem in Sect. 9.2.4. How can we solve such a stationary problem (9.38)? The simplest strategy, when we already have a solver for the corresponding time-dependent problem, is to use that solver and simulate until *t* → *∞*, which in practice means that *u*(*x*, *t*) no longer changes in time (within some tolerance).

A nice feature of implicit methods like the Backward Euler scheme is that one can take *one very long time step* to “infinity” and produce the solution of (9.38).

- a)
Let (9.38) be valid at mesh points *x*_{i} in space, discretize *u*′′ by a finite difference, and set up a system of equations for the point values *u*_{i}, *i* = 0, …, *N*, where *u*_{i} is the approximation at mesh point *x*_{i}.
- b)
- c)
Demonstrate, by running a program, that you can take one large time step with the Backward Euler scheme and compute the solution of (9.38). The solution is very boring since it is constant: *u*(*x*) = *C*.

*Remarks*

If the interest is in the stationary limit of a diffusion equation, one can either solve the associated Laplace or Poisson equation directly, or use a Backward Euler scheme for the time-dependent diffusion equation with a very long time step. Using a Forward Euler scheme with small time steps is typically inappropriate in such situations: the solution changes more and more slowly, but the time step must still be kept small, so it takes "forever" to approach the stationary state. This is yet another example of why one needs implicit methods like the Backward Euler scheme.
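As a sketch of this idea (the parameter values, the boundary conditions *u*(0) = *C* and *u*′(*L*) = 0, and the initial condition are our own illustrative choices, not necessarily those of (9.38)), one Backward Euler step with a huge Δ*t* lands directly on the constant stationary solution:

```python
import numpy as np

# One very long Backward Euler step for u_t = beta*u_xx on [0, L],
# with u(0) = C (Dirichlet) and u'(L) = 0 (Neumann). A dense matrix is
# used for brevity; a production code would use a tridiagonal solver.
beta, C, L, N = 1.0, 1.0, 1.0, 20
x = np.linspace(0, L, N + 1)
dx = x[1] - x[0]
dt = 1e6                              # "infinite" time step
u0 = np.sin(np.pi * x) + C            # arbitrary initial condition

F = beta * dt / dx**2
A = np.zeros((N + 1, N + 1))
b = u0.copy()
A[0, 0] = 1.0; b[0] = C               # Dirichlet condition: u_0 = C
for i in range(1, N):                 # interior rows of I - dt*beta*D2
    A[i, i - 1] = -F
    A[i, i] = 1 + 2 * F
    A[i, i + 1] = -F
A[N, N - 1] = -2 * F                  # Neumann via ghost point u_{N+1} = u_{N-1}
A[N, N] = 1 + 2 * F
u = np.linalg.solve(A, b)
print(np.allclose(u, C, atol=1e-4))   # prints True: one step gives u(x) = C
```

With Δ*t* this large, the time-derivative term is negligible and the linear system is essentially the discretized stationary problem, so the single implicit step reproduces *u*(*x*) = *C* up to a correction of order 1∕Δ*t*.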

### Exercise 9.10: Solve a Two-Point Boundary Value Problem

**Hint**

Do Exercise 9.9. Modify the boundary condition in the code so it incorporates a known value for *u*(1).

Filename: 2ptBVP.py.

## Footnotes

- 1. You may read about using a *terminal* in Appendix A.


## Copyright information

**Open Access** This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.