Programming for Computations – Python, pp. 175–201
Solving Nonlinear Algebraic Equations
Abstract
As a reader of this book, you might be well into mathematics and often “accused” of being particularly good at solving equations (a typical comment at family dinners!). How true is it, however, that you can solve many types of equations with pen and paper alone? Restricting our attention to algebraic equations in one unknown x, you can certainly do linear equations: ax + b = 0, and quadratic ones: ax^{2} + bx + c = 0. You may also know that there are formulas for the roots of cubic and quartic equations too. Maybe you can do the special trigonometric equation \(\sin x + \cos x = 1\) as well, but there it (probably?) stops. Equations that are not reducible to one of those mentioned cannot be solved by general analytical techniques, which means that most algebraic equations arising in applications cannot be treated with pen and paper!
If we exchange the traditional idea of finding exact solutions to equations with the idea of rather finding approximate solutions, a whole new world of possibilities opens up. With such an approach, we can in principle solve any algebraic equation.
So, when do we really need to solve algebraic equations beyond the simplest types we can treat with pen and paper? There are two major application areas. One is when using implicit numerical methods for ordinary differential equations. These give rise to one algebraic equation, or a system of such equations. The other major application type is optimization, i.e., finding the maxima or minima of a function. These maxima and minima are normally found by solving the algebraic equation F′(x) = 0 if F(x) is the function to be optimized. Differential equations are very much used throughout science and engineering, and actually most engineering problems are optimization problems in the end, because one wants a design that maximizes performance and minimizes cost.
We first consider one algebraic equation in one variable, for which we present some fundamental solution algorithms that any reader should get to know. Our focus will, as usual, be placed on the programming of the algorithms. Systems of nonlinear algebraic equations with many variables arise from implicit methods for ordinary and partial differential equations as well as in multivariate optimization. Our attention will be restricted to Newton’s method for such systems of nonlinear algebraic equations.
Root Finding
When solving algebraic equations f(x) = 0, we often say that the solution x is a root of the equation. The solution process itself is thus often called root finding.
7.1 Brute Force Methods
The representation of a mathematical function f(x) on a computer takes two forms. One is a Python function returning the function value given the argument, while the other is a collection of points (x, f(x)) along the function curve. The latter is the representation we use for plotting, together with an assumption of linear variation between the points. This representation is also very well suited for equation solving: we simply go through all points and see if the function crosses the x axis, or, for optimization, we test for local maximum or minimum points. Because examining a huge number of points involves a lot of work, and also because the idea is extremely simple, such approaches are often referred to as brute force methods.
7.1.1 Brute Force Root Finding
We want to solve f(x) = 0, i.e., find the points x where f crosses the x axis. A brute force algorithm is to run through all points on the curve and check if one point is below the x axis and if the next point is above the x axis, or the other way around. If this is found to be the case, we know that, when f is continuous, it has to cross the x axis at least once between these two x values. In other words, f is zero at least once on that subinterval.
Note that, in the following algorithm, we refer to “the” root on a subinterval, even if there may be more than one root in principle. Whether there is more than one root on a subinterval will of course depend on the function, as well as on the size and location of the subinterval. For simplicity, we will just assume there is at most one root on a subinterval (or that it is sufficiently precise to talk about one root, even if there could be more).
Numerical Algorithm
More precisely, we have a set of n + 1 points (x_{i}, y_{i}), y_{i} = f(x_{i}), i = 0, …, n, where x_{0} < … < x_{n}. We check if y_{i} < 0 and y_{i+1} > 0 (or the other way around). A compact expression for this check is to perform the test y_{i}y_{i+1} < 0. If so, the root of f(x) = 0 is in [x_{i}, x_{i+1}].
Implementation
Given some Python implementation f(x) of our mathematical function, a straightforward implementation of the above algorithm that quits after finding one root looks like
(See the file brute_force_root_finder_flat.py.)
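A minimal sketch of such a flat program might read as follows (the interval [0, 4] and the choice n = 1000 are assumptions for illustration; the book's file may use other values, so the error will differ accordingly):

```python
from numpy import linspace, exp, cos

def f(x):
    return exp(-x**2) * cos(4*x)

n = 1000
x = linspace(0, 4, n + 1)
y = f(x)

root = None  # stays None if no sign change is found
for i in range(n):
    if y[i] * y[i+1] < 0:
        # sign change: interpolate linearly between the two points
        root = x[i] - (x[i+1] - x[i]) / (y[i+1] - y[i]) * y[i]
        break  # quit after finding one root

if root is None:
    print('Could not find any root in [%g, %g]' % (x[0], x[-1]))
else:
    print('Found (the first) root as x=%g' % root)
```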
Note the nice use of setting root to None: we can simply test if root is None to see if we found a root and overwrote the None value, or if we did not find any root among the tested points.
Running this program with some function, say \(f(x)=e^{-x^2}\cos(4x)\) (which has a root at \(x = \frac {\pi }{8}\)), gives the root 0.39269910538048097, which has an error of 2.4 × 10^{−8}. Increasing the number of points by a factor of ten gives a root with an error of 2.9 × 10^{−10}.
After such a quick “flat” implementation of an algorithm, we should always try to offer the algorithm as a Python function, applicable to as wide a problem domain as possible. The function should take f and an associated interval [a, b] as input, as well as a number of points (n), and return a list of all the roots in [a, b]. Here is our candidate for a good implementation of the brute force root finding algorithm:
(See the file brute_force_root_finder_function.py.)
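A sketch of such a function, together with an application like the one described below, might read (details may differ from the book's file):

```python
from numpy import linspace

def brute_force_root_finder(f, a, b, n):
    """Return a list of all roots of f found among n points in [a, b]."""
    x = linspace(a, b, n)
    y = f(x)
    roots = []
    for i in range(n - 1):
        if y[i] * y[i+1] < 0:
            # sign change: interpolate linearly to estimate the root
            root = x[i] - (x[i+1] - x[i]) / (y[i+1] - y[i]) * y[i]
            roots.append(root)
        elif y[i] == 0:  # the grid point is itself a root
            roots.append(x[i])
    return roots

if __name__ == '__main__':
    from numpy import exp, cos
    roots = brute_force_root_finder(
        lambda x: exp(-x**2) * cos(4*x), 0, 4, 1001)
    if roots:
        print(roots)
    else:
        print('Could not find any roots')
```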
This time we use another elegant technique to indicate if roots were found or not: roots is an empty list if the root finding was unsuccessful, otherwise it contains all the roots. Application of the function to the previous example can be coded as
Note that the test if roots evaluates to True if roots is nonempty. This is a general rule in Python: if X evaluates to True if X is nonempty or has a nonzero value.
Running the program gives the output
7.1.2 Brute Force Optimization
Numerical Algorithm
We realize that x_{i} corresponds to a maximum point if y_{i−1} < y_{i} > y_{i+1}. Similarly, x_{i} corresponds to a minimum if y_{i−1} > y_{i} < y_{i+1}. We can do this test for all “inner” points i = 1, …, n − 1 to find all local minima and maxima. In addition, we need to add an end point, i = 0 or i = n, if the corresponding y_{i} is a global maximum or minimum.
Implementation
The algorithm above can be translated to the following Python function (file brute_force_optimizer.py):
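A sketch of such a function might look as follows (the exact details in brute_force_optimizer.py may differ):

```python
from numpy import linspace

def brute_force_optimizer(f, a, b, n):
    """Find local minima and maxima of f on [a, b] from n+1 samples.
    Returns two lists of (x, y) points: minima and maxima."""
    x = linspace(a, b, n + 1)
    y = f(x)
    minima = []
    maxima = []
    for i in range(1, n):           # test all inner points
        if y[i-1] < y[i] > y[i+1]:
            maxima.append((x[i], y[i]))
        if y[i-1] > y[i] < y[i+1]:
            minima.append((x[i], y[i]))
    # Add an end point if it is a global maximum or minimum
    y_max_inner = max(p[1] for p in maxima) if maxima else -float('inf')
    y_min_inner = min(p[1] for p in minima) if minima else float('inf')
    for i in (0, n):
        if y[i] > y_max_inner:
            maxima.append((x[i], y[i]))
        if y[i] < y_min_inner:
            minima.append((x[i], y[i]))
    return minima, maxima
```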
The max and min functions are standard Python functions for finding the maximum and minimum element of a list or an object that one can iterate over with a for loop.
An application to \(f(x)=e^{-x^2}\cos(4x)\) looks like
Running the program gives
7.1.3 Model Problem for Algebraic Equations
In the following, we will present several efficient and accurate methods for solving nonlinear algebraic equations, both a single equation and systems of equations. The methods all have in common that they search for approximate solutions. The methods differ, however, in the way they perform the search for solutions. The idea behind the search influences the efficiency of the search and the reliability of actually finding a solution. For example, Newton’s method is very fast, but not reliable, while the bisection method is the slowest, but absolutely reliable. No method is best for all problems, so we need different methods for different problems.
What Is the Difference Between Linear and Nonlinear Equations?
You know how to solve linear equations ax + b = 0: x = −b∕a. All other types of equations f(x) = 0, i.e., when f(x) is not a linear function of x, are called nonlinear. A typical way of recognizing a nonlinear equation is to observe that x is “not alone” as in ax, but involved in a product with itself, such as in x^{3} + 2x^{2} − 9 = 0. We say that x^{3} and 2x^{2} are nonlinear terms. An equation like \(\sin x + e^x\cos x=0\) is also nonlinear even though x is not explicitly multiplied by itself: the Taylor series of \(\sin x\), e^{x}, and \(\cos x\) all involve polynomials in x where x is multiplied by itself.
7.2 Newton’s Method
Newton’s method, also known as the Newton-Raphson method, is a very famous and widely used method for solving nonlinear algebraic equations.^{1} Compared to the other methods presented in this chapter, i.e., the secant and bisection methods, it is generally the fastest one (although computational speed is rarely an issue with a single equation on modern laptops). However, it does not guarantee that an existing solution will be found.
A fundamental idea of numerical methods for nonlinear equations is to construct a series of linear equations (since we know how to solve linear equations) and hope that the solutions of these linear equations bring us closer and closer to the solution of the nonlinear equation. The idea will be clearer when we present Newton’s method and the secant method.
7.2.1 Deriving and Implementing Newton’s Method
The idea is to approximate f near the current guess x_{0} by its tangent line, i.e., the straight line with the two properties that
 1.
the slope equals f′(x_{0})
 2.
the tangent touches the f(x) curve at x_{0}
The next approximation x_{1} is taken as the point where this tangent line crosses the x axis, which gives the Newton update x_{1} = x_{0} − f(x_{0})∕f′(x_{0}). For f(x) = x^{2} − 9 and x_{0} = 1000, one update gives x_{1} ≈ 500, and another gives x_{2} ≈ 250.
We moved from 1000 to 250 in two iterations, so it is exciting to see how fast we can approach the solution x = 3. A computer program can automate the calculations. Our first try at implementing Newton’s method is in a function naive_Newton (found in naive_Newton.py):
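A minimal version of such a function, consistent with the description above, might read:

```python
def naive_Newton(f, dfdx, x, eps):
    """Solve f(x) = 0 by Newton's method, starting from x.
    Iterate x -> x - f(x)/f'(x) until |f(x)| < eps."""
    while abs(f(x)) > eps:
        x = x - float(f(x)) / dfdx(x)
    return x
```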
The argument x is the starting value, called x_{0} in our previous mathematical description.
To solve the problem x^{2} = 9 we also need to implement
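For our example, the required function and derivative may simply read:

```python
def f(x):
    return x**2 - 9

def dfdx(x):
    return 2*x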
which in naive_Newton.py is included by use of an extra function and a test block.
Why Not Use an Array for the x Approximations?
Such an array is fine, but requires storage of all the approximations. In large industrial applications, where Newton’s method solves millions of equations at once, one cannot afford to store all the intermediate approximations in memory, so then it is important to understand that the algorithm in Newton’s method has no more need for x_{n} when x_{n+1} is computed. Therefore, we can work with one variable x and overwrite the previous value:
Running naive_Newton(f, dfdx, 1000, eps=0.001) results in the approximate solution 3.000027639. A smaller value of eps will produce a more accurate solution. Unfortunately, the plain naive_Newton function does not return how many iterations it used, nor does it print out all the approximations x_{0}, x_{1}, x_{2}, …, which would indeed be a nice feature. If we insert such a printout (print(x) in the while loop), a rerun results in
We clearly see that the iterations approach the solution quickly. This speed of the search for the solution is the primary strength of Newton’s method compared to other methods.
7.2.2 Making a More Efficient and Robust Implementation
The naive_Newton function works fine for the example we are considering here. However, for more general use, there are some pitfalls that should be fixed in an improved version of the code. An example may illustrate what the problem is.
Let us use naive_Newton to solve \(\tanh (x)=0\), which has the solution x = 0 (interactively, you may define f(x) = tanh(x) and f′(x) = 1 − tanh^{2}(x) as Python functions and import the naive_Newton function from naive_Newton.py). With x_{0} ≤ 1.08 everything works fine. For example, x_{0} = 1.08 leads to six iterations if 𝜖 = 0.001:
Adjusting x_{0} slightly to 1.09 gives division by zero! The approximations computed by Newton’s method become
The division by zero is caused by x_{7} = −1.26055913647 × 10^{11}, because \(\tanh (x_7)\) is 1.0 to machine precision, and then \(f'(x)=1-\tanh^2(x)\) becomes zero in the denominator in Newton’s method.
The underlying problem, leading to the division by zero in the above example, is that Newton’s method diverges: the approximations move further and further away from x = 0. If it had not been for the division by zero, the condition in the while loop would always be true and the loop would run forever. Divergence of Newton’s method occasionally happens, and the remedy is to abort the method when a maximum number of iterations is reached.
Another disadvantage of the naive_Newton function is that it calls the f(x) function twice as many times as necessary. This extra work is of no concern when f(x) is fast to evaluate, but in large-scale industrial software, one call to f(x) might take hours or days, and then removing unnecessary calls is important. The solution in our function is to store the call f(x) in a variable (f_value) and reuse the value instead of making a new call f(x).
In summary, the improved version of the implementation should:
- handle division by zero properly
- allow a maximum number of iterations
- avoid the extra evaluation of f(x)
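A sketch of the improved function, along the lines just described (the version in Newtons_method.py may differ in details), might read:

```python
import sys

def Newton(f, dfdx, x, eps):
    """Newton's method with safeguards: at most 100 iterations,
    division-by-zero handling, and one f(x) call per iteration."""
    f_value = f(x)
    iteration_counter = 0
    while abs(f_value) > eps and iteration_counter < 100:
        try:
            x = x - float(f_value) / dfdx(x)
        except ZeroDivisionError:
            print('Error! - derivative zero for x = ', x)
            sys.exit(1)     # abort with a nonzero (error) exit code
        f_value = f(x)      # store and reuse, avoiding an extra call
        iteration_counter += 1
    # Here, either a solution was found, or too many iterations
    if abs(f_value) > eps:
        iteration_counter = -1  # signal lack of convergence
    return x, iteration_counter
```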
Handling of the potential division by zero is done by a try-except construction.^{3}
The division by zero will always be detected and the program will be stopped. The main purpose of our way of treating the division by zero is to give the user a more informative error message and stop the program in a gentler way.
Calling sys.exit with an argument different from zero (here 1) signifies that the program stopped because of an error. It is a good habit to supply the value 1, because tools in the operating system can then be used by other programs to detect that our program failed.
To prevent an infinite loop because of divergent iterations, we have introduced the integer variable iteration_counter to count the number of iterations in Newton’s method. With iteration_counter we can easily extend the condition in the while loop such that no more iterations take place when the number of iterations reaches 100. We could easily let this limit be an argument to the function rather than a fixed constant.
The Newton function returns the approximate solution and the number of iterations. The latter equals − 1 if the convergence criterion |f(x)| < 𝜖 was not reached within the maximum number of iterations. In the calling code, we print out the solution and the number of function calls. The main cost of a method for solving f(x) = 0 equations is usually the evaluation of f(x) and f′(x), so the total number of calls to these functions is an interesting measure of the computational work. Note that in function Newton there is an initial call to f(x) and then one call to f and one to f′ in each iteration.
Running Newtons_method.py, we get the following printout on the screen:
The Newton scheme will work better if the starting value is close to the solution. A good starting value may often make the difference as to whether the code actually finds a solution or not. Because of its speed (and when speed matters), Newton’s method is often the method of first choice for solving nonlinear algebraic equations, even if the scheme is not guaranteed to work. In cases where the initial guess may be far from the solution, a good strategy is to run a few iterations with the bisection method (see Sect. 7.4) to narrow down the region where f is close to zero and then switch to Newton’s method for fast convergence to the solution.
Using sympy to Find the Derivative
Newton’s method requires the analytical expression for the derivative f′(x). Deriving f′(x) by hand is not always a reliable process if f(x) is a complicated function. However, Python has the symbolic package SymPy, which we may use to create the required dfdx function. With our sample problem, we get:
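A sketch of this (assuming SymPy is installed) might read:

```python
import sympy as sym

x = sym.symbols('x')              # define x as a mathematical symbol
f_expr = x**2 - 9                 # symbolic expression for f(x)
dfdx_expr = sym.diff(f_expr, x)   # compute f'(x) symbolically
print(dfdx_expr)                  # the exact derivative, 2*x

# Turn the symbolic expressions into plain, callable Python functions
f = sym.lambdify([x], f_expr)
dfdx = sym.lambdify([x], dfdx_expr)
```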
The nice feature of this code snippet is that dfdx_expr is the exact analytical expression for the derivative, 2*x (seen if you print it out). This is a symbolic expression, so we cannot do numerical computing with it. However, with lambdify, such symbolic expressions are turned into callable Python functions, as seen here with f and dfdx.
The next method is the secant method, which is usually slower than Newton’s method, but it does not require an expression for f′(x), and it has only one function call per iteration.
7.3 The Secant Method
We can store the approximations x_{n} in an array, but as in Newton’s method, we notice that the computation of x_{n+1} only needs knowledge of x_{n} and x_{n−1}, not “older” approximations. Therefore, we can make use of only three variables: x for x_{n+1}, x1 for x_{n}, and x0 for x_{n−1}. Note that x0 and x1 must be given (guessed) for the algorithm to start.
A program secant_method.py that solves our example problem may be written as:
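A sketch of such a program might read as follows (the start values in the demo are assumptions for illustration; the book's file may use other values):

```python
import sys

def secant(f, x0, x1, eps):
    """Solve f(x) = 0 by the secant method, starting from x0 and x1."""
    f_x0 = f(x0)
    f_x1 = f(x1)
    iteration_counter = 0
    while abs(f_x1) > eps and iteration_counter < 100:
        try:
            denominator = float(f_x1 - f_x0) / (x1 - x0)
            x = x1 - float(f_x1) / denominator
        except ZeroDivisionError:
            print('Error! - denominator zero for x = ', x1)
            sys.exit(1)
        x0 = x1
        x1 = x
        f_x0 = f_x1
        f_x1 = f(x1)   # only one new function call per iteration
        iteration_counter += 1
    # Here, either a solution was found, or too many iterations
    if abs(f_x1) > eps:
        iteration_counter = -1  # signal lack of convergence
    return x, iteration_counter

if __name__ == '__main__':
    solution, no_iterations = secant(lambda x: x**2 - 9, 1000, 999, 1e-6)
    if no_iterations > 0:
        print('Number of function calls: %d' % (2 + no_iterations))
        print('A solution is: %f' % solution)
    else:
        print('Solution not found!')
```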
The number of function calls is now related to no_iterations, i.e., the number of iterations, as 2 + no_iterations, since we need two function calls before entering the while loop, and then one function call per loop iteration. Note that, even though we need two points on the graph to compute each updated estimate, only a single function call (f(x1)) is required in each iteration, since f(x0) becomes the “old” f(x1) and may simply be copied as f_x0 = f_x1 (the exception is the very first iteration, where two function evaluations are needed).
Running secant_method.py gives the following printout on the screen:
7.4 The Bisection Method
Neither Newton’s method nor the secant method can guarantee that an existing solution will be found (see Exercises 7.1 and 7.2). The bisection method, however, does. On the other hand, if there are several solutions present, the bisection method finds only one of them, just like Newton’s method and the secant method. The bisection method is slower than the other two, so reliability comes at a cost of speed (but, again, for a single equation this is rarely an issue on today’s laptops).
To solve x^{2} − 9 = 0, \(x \in \left [0, 1000\right ]\), with the bisection method, we reason as follows. The first key idea is that if f(x) = x^{2} − 9 is continuous on the interval and the function values for the interval endpoints (x_{L} = 0, x_{R} = 1000) have opposite signs, f(x) must cross the x axis at least once on the interval. That is, we know there is at least one solution.
The second key idea comes from dividing the interval in two equal parts, one to the left and one to the right of the midpoint x_{M} = 500. By evaluating the sign of f(x_{M}), we will immediately know whether a solution must exist to the left or right of x_{M}. This is so, since if f(x_{M}) ≥ 0, we know that f(x) has to cross the x axis between x_{L} and x_{M} at least once (using the same argument as for the original interval). Likewise, if instead f(x_{M}) ≤ 0, we know that f(x) has to cross the x axis between x_{M} and x_{R} at least once.
In any case, we may proceed with half the interval only. The exception is if f(x_{M}) ≈ 0, in which case a solution is found. Such interval halving can be continued until a solution is found. A “solution” in this case is when |f(x_{M})| is sufficiently close to zero, more precisely (as before): |f(x_{M})| < 𝜖, where 𝜖 is a small number specified by the user.
The sketched strategy seems reasonable, so let us write a reusable function that can solve a general algebraic equation f(x) = 0 (bisection_method.py):
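A sketch of such a function, following the strategy above (details may differ from the book's file), might read:

```python
import sys

def bisection(f, x_L, x_R, eps):
    """Solve f(x) = 0 on [x_L, x_R] by repeated interval halving."""
    f_L = f(x_L)
    if f_L * f(x_R) > 0:
        print('Error! Function does not have opposite '
              'signs at interval endpoints!')
        sys.exit(1)
    x_M = float(x_L + x_R) / 2.0
    f_M = f(x_M)
    iteration_counter = 1
    while abs(f_M) > eps:
        if f_L * f_M > 0:    # same sign: root must be right of x_M
            x_L = x_M
            f_L = f_M
        else:                # opposite signs: root is left of x_M
            x_R = x_M
        x_M = float(x_L + x_R) / 2
        f_M = f(x_M)
        iteration_counter += 1
    return x_M, iteration_counter
```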
Note that we first check if f changes sign in [a, b], because that is a requirement for the algorithm to work. The algorithm also relies on a continuous f(x) function, but this is very challenging for a computer code to check.
We get the following printout to the screen when bisection_method.py is run:
We notice that the number of function calls is much higher than with the previous methods.
Required Work in the Bisection Method
7.5 Rate of Convergence
Convergence Rate and Iterations
When we previously addressed numerical integration (Chap. 6), the approximation error E was related to the size h of the subintervals and the convergence rate r as E = Kh^{r}, K being some constant. For the iterative methods of this chapter, the error is instead measured per iteration: with e_{n} = |x − x_{n}| denoting the error in iteration n, the convergence rate q is defined through the model
\(e_{n+1} = Ce_n^q,\)  (7.5)
where C is some constant. An estimate q_{n} of the rate at iteration n follows from two consecutive error ratios: \(q_n = \ln (e_{n+1}/e_n)/\ln (e_n/e_{n-1})\).
Observe that (7.5) gives a different definition of convergence rate than the one used for numerical integration. This makes sense, since numerical integration is based on a partitioning of the original integration interval into n subintervals, which is very different from the iterative procedures used here for solving nonlinear algebraic equations.
Modifying Our Functions to Return All Approximations
To compute all the q_{n} values, we need all the x_{n} approximations. However, our previous implementations of Newton’s method, the secant method, and the bisection method returned just the final approximation.
Therefore, we have modified our solvers^{5} accordingly, and placed them in nonlinear_solvers.py. A user can choose whether the final value or the whole history of solutions is to be returned. Each of the extended implementations now takes an extra parameter return_x_list. This parameter is a boolean, set to True if the function is supposed to return all the root approximations, or False, if the function should only return the final approximation.
As an example, let us take a closer look at Newton:
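The extended Newton may be sketched like this (the version in nonlinear_solvers.py may differ in details):

```python
import sys

def Newton(f, dfdx, x, eps, return_x_list=False):
    """Newton's method; optionally return the whole list of
    approximations x_1, x_2, ... instead of just the final one."""
    f_value = f(x)
    iteration_counter = 0
    if return_x_list:
        x_list = []
    while abs(f_value) > eps and iteration_counter < 100:
        try:
            x = x - float(f_value) / dfdx(x)
        except ZeroDivisionError:
            print('Error! - derivative zero for x = ', x)
            sys.exit(1)
        f_value = f(x)
        iteration_counter += 1
        if return_x_list:
            x_list.append(x)
    # Here, either a solution was found, or too many iterations
    if abs(f_value) > eps:
        iteration_counter = -1  # signal lack of convergence
    if return_x_list:
        return x_list, iteration_counter
    else:
        return x, iteration_counter
```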
We can now make such a call with return_x_list=True and get a list x of all the approximations returned. With knowledge of the exact solution x of f(x) = 0 we can compute all the errors e_{n} and all the associated q_{n} values with the compact function (also found in nonlinear_solvers.py)
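A sketch of this compact function might read:

```python
from math import log

def rate(x, x_exact):
    """Given approximations x[0], x[1], ... and the exact root x_exact,
    return the sequence of estimated convergence rates q_n."""
    e = [abs(x_ - x_exact) for x_ in x]       # errors e_n
    q = [log(e[n+1] / e[n]) / log(e[n] / e[n-1])
         for n in range(1, len(e) - 1)]
    return q
```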
The error model (7.5) works well for Newton’s method and the secant method. For the bisection method, however, it works well in the beginning, but not when the solution is approached.
We can compute the rates q_{n} and print them nicely (print_rates.py),
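A sketch of print_rates, reusing the rate function just shown (repeated here so the snippet is self-contained), might read:

```python
from math import log

def rate(x, x_exact):
    e = [abs(x_ - x_exact) for x_ in x]
    return [log(e[n+1] / e[n]) / log(e[n] / e[n-1])
            for n in range(1, len(e) - 1)]

def print_rates(method, x, x_exact):
    """Print the estimated convergence rates with two decimals."""
    q = ['%.2f' % q_ for q_ in rate(x, x_exact)]
    print(method + ':')
    for q_ in q:
        print(q_)
```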
The result for print_rates('Newton', x, 3) is
indicating that q = 2 is the rate for Newton’s method. A similar computation using the secant method, gives the rates
Here it seems that q ≈ 1.6 is the limit.
Remark
If, in the bisection method, we think of the length of the current interval containing the solution as the error e_{n}, then (7.5) works perfectly since \(e_{n+1}=\frac {1}{2}e_n\), i.e., q = 1 and \(C=\frac {1}{2}\). But if e_{n} is the true error x − x_{n}, it is easily seen from a sketch that this error can oscillate between the current interval length and a potentially very small value as we approach the exact solution. The corresponding rates q_{n} then fluctuate widely and are of no interest.
7.6 Solving Multiple Nonlinear Algebraic Equations
So far in this chapter, we have considered a single nonlinear algebraic equation. However, systems of such equations arise in a number of applications, foremost nonlinear ordinary and partial differential equations. Of the previous algorithms, only Newton’s method is suitable for extension to systems of nonlinear equations.
7.6.1 Abstract Notation
7.6.2 Taylor Expansions for MultiVariable Functions
We follow the ideas of Newton’s method for one equation in one variable: approximate the nonlinear f by a linear function and find the root of that function. When n variables are involved, we need to approximate a vector function F(x) by some linear function \(\tilde {\boldsymbol {F}} = \boldsymbol {J}\boldsymbol {x} + \boldsymbol {c}\), where J is an n × n matrix and c is some vector of length n.
The matrix ∇F, with entries (∇F)_{ij} = ∂F_{i}∕∂x_{j}, is called the Jacobian of F and is often denoted by J.
7.6.3 Newton’s Method
Given an approximation x_{i} to the solution, one iteration of Newton’s method for the system F(x) = 0 consists of two steps:
 1.
Solve the linear system J(x_{i})δ = −F(x_{i}) with respect to δ.
 2.
Set x_{i+1} = x_{i} + δ.
When nonlinear systems of algebraic equations arise from discretization of partial differential equations, the Jacobian is very often sparse, i.e., most of its elements are zero. In such cases it is important to use algorithms that can take advantage of the many zeros. Gaussian elimination is then a slow method, and (much) faster methods are based on iterative techniques.
7.6.4 Implementation
Here is a very simple implementation of Newton’s method for systems of nonlinear algebraic equations:
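Such an implementation, assuming NumPy, might be sketched as follows:

```python
import numpy as np

def Newton_system(F, J, x, eps):
    """Solve the nonlinear system F(x) = 0 by Newton's method.
    J is a function returning the Jacobian matrix at x.
    x holds the start value; iterate until ||F|| (L2 norm) < eps."""
    F_value = F(x)
    F_norm = np.linalg.norm(F_value, ord=2)
    iteration_counter = 0
    while abs(F_norm) > eps and iteration_counter < 100:
        delta = np.linalg.solve(J(x), -F_value)  # J(x) delta = -F(x)
        x = x + delta
        F_value = F(x)
        F_norm = np.linalg.norm(F_value, ord=2)
        iteration_counter += 1
    # Here, either a solution was found, or too many iterations
    if abs(F_norm) > eps:
        iteration_counter = -1  # signal lack of convergence
    return x, iteration_counter
```

As a quick check (a hypothetical stand-in for the system (7.10) and (7.11), which is not repeated here), the system x_{0}^{2} + x_{1}^{2} = 4, x_{0} = x_{1} has the solution x_{0} = x_{1} = √2, and Newton_system converges to it from the start value (1, 1).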
We can test the function Newton_system with the 2 × 2 system (7.10) and (7.11):
Here, the testing is based on the L2 norm^{6} of the error vector. Alternatively, we could test against the values of x that the algorithm finds, with appropriate tolerances. For example, with eps=0.0001 as the tolerance on the error norm, a tolerance of 10^{−4} can also be used for x[0] and x[1].
7.7 Exercises
Exercise 7.1: Understand Why Newton’s Method Can Fail
The purpose of this exercise is to understand when Newton’s method works and fails. To this end, solve \(\tanh x=0\) by Newton’s method and study the intermediate details of the algorithm. Start with x_{0} = 1.08. Plot the tangent in each iteration of Newton’s method. Then repeat the calculations and the plotting when x_{0} = 1.09. Explain what you observe.
Filename: Newton_failure.∗.
Exercise 7.2: See If the Secant Method Fails
Solve the same problem as in Exercise 7.1, this time with the secant method, trying the following pairs of start values:
 1.
x_{0} = 1.08 and x_{1} = 1.09
 2.
x_{0} = 1.09 and x_{1} = 1.1
 3.
x_{0} = 1 and x_{1} = 2.3
 4.
x_{0} = 1 and x_{1} = 2.4
Exercise 7.3: Understand Why the Bisection Method Cannot Fail
Solve the same problem as in Exercise 7.1, using the bisection method, but let the initial interval be [−5, 3]. Report how the interval containing the solution evolves during the iterations.
Filename: bisection_nonfailure.∗.
Exercise 7.4: Combine the Bisection Method with Newton’s Method
An attractive idea is to combine the reliability of the bisection method with the speed of Newton’s method. Such a combination is implemented by running the bisection method until we have a narrow interval, and then switching to Newton’s method for speed.
Write a function that implements this idea. Start with an interval [a, b] and switch to Newton’s method when the current interval in the bisection method is a fraction s of the initial interval (i.e., when the interval has length s(b − a)). Potential divergence of Newton’s method is still an issue, so if the approximate root jumps out of the narrowed interval (where the solution is known to lie), one can switch back to the bisection method. The value of s must be given as an argument to the function, but it may have a default value of 0.1.
Try the new method on \(\tanh (x)=0\) with an initial interval [−10, 15].
Filename: bisection_Newton.py.
Exercise 7.5: Write a Test Function for Newton’s Method
The purpose of this exercise is to verify the implementation of Newton’s method in the Newton function in the file nonlinear_solvers.py. Construct an algebraic equation and perform two iterations of Newton’s method by hand or with the aid of SymPy. Find the corresponding size of |f(x)| and use this as the value for eps when calling Newton. The function should then also perform two iterations and return the same approximation to the root as you calculated manually. Implement this idea as a unit test in a test function test_Newton().
Filename: test_Newton.py.
Exercise 7.6: Halley’s Method and the Decimal Module
 a)
Implement Halley’s method as a function Halley. Place the function in a module that has a test block, and test the function by solving x^{2} − 9 = 0, using x_{0} = 1000 as your initial guess.
 b)
Compared to Newton’s method, more computations per iteration are needed with Halley’s method, but a convergence rate of 3 may be achieved close to the root. You are now supposed to extend your module with a function compute_rates_decimal, which computes the convergence rates achieved with your implementation of Halley (for the given problem).
The implementation of compute_rates_decimal should involve the decimal module (you search for the right documentation!), to better handle very small errors that may enter the rate computations. For comparison, you should also compute the rates without using the decimal module. Test and compare with several parameter values.
Hint
The logarithms in the rate calculation might require some extra consideration when you use the decimal module.
Filename: Halleys_method.py.
Exercise 7.7: Fixed Point Iteration
For comparison, however, you will first be asked to solve the equation by Newton’s method (which, in fact, can be seen as fixed point iteration^{8}).
 a)
Write a program that solves this equation by Newton’s method. Use x = 1 as your starting value. To better judge the printed answer found by Newton’s method, let the code also plot the relevant function on the given interval.
 b)
The given equation may be rewritten as \(x = \frac {e^{x} - x^3}{2}\). Extend your program with a function fixed_point_iteration, which takes appropriate parameters, and uses fixed point iteration to find and return a solution (if found), as well as the number of iterations required. Use x = 1 as starting value.
When the program is executed, the original equation should be solved both with Newton’s method and via fixed point iterations (as just described). Compare the output from the two methods.
Exercise 7.8: Solve Nonlinear Equation for a Vibrating Beam
 a)
Plot the equation to be solved so that one can inspect where the zero crossings occur.
Hint
When writing the equation as f(β) = 0, the f function increases its amplitude dramatically with β. It is therefore wise to look at an equation with damped amplitude, g(β) = e^{−β}f(β) = 0. Plot g instead.
 b)
Compute the first three frequencies.
Footnotes
 1.
Read more about Newton’s method, e.g., on https://en.wikipedia.org/wiki/Newton%27s_method.
 2.
The term scheme is often used as a synonym for method or computational recipe.
 3.
Professional programmers would avoid calling sys.exit inside a function. Instead, they would raise a new exception with an informative error message, and let the calling code have another try-except construction to stop the program.
 5.
An implemented numerical solution algorithm is often called a solver.
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.