Abstract
This is a self-contained presentation of integration in the complex plane. Beginning with line integrals and the elements of complex function theory, the Cauchy-Riemann equations are derived and the concept of an analytic function is introduced. That is followed by discussions of the integral theorems of Green and Cauchy, integrand singularities, the residue theorem, and the complications caused by multi-valued integrands (which lead to the concepts of branch points and branch cuts). Numerous detailed examples are included, in each discussion, of integrating along closed curves in the complex plane. The grand conclusion is that if such curves are properly constructed (that is, they include the infinite or semi-infinite real axis and handle any singularities present correctly), then a wide variety of real-valued, improper definite integrals can be calculated.
8.1 Prelude
In this, the penultimate chapter of the book, I’ll give you a really fast, stripped-down, ‘crash-course’ presentation of the very beginnings of complex function theory, and the application of that theory to one of the gems of mathematics: contour integration and its use in doing definite integrals. As an historian of mathematics recently wrote, “A curious feature of mathematical analysis in the years around 1800 was the use of complex variables to evaluate real definite integrals. The practice had begun with Euler [recall the derivation of (7.5.6) and (7.5.7)] … In his Mémoire on this topic that he presented in 1814 Cauchy commented that many of the integrals had been evaluated for the first time ‘by means of a kind of induction’ based on ‘the passage from the real to the imaginary’ and that no less a figure than Laplace had remarked that the method ‘however carefully employed, leaves something to be desired in the proofs of the results.’ Cauchy accordingly set himself the task of finding a ‘direct and rigorous analysis’ of this dubious passage.”Footnote 1
As we start this chapter on what came about from Cauchy’s labors, I’ll assume only that you are familiar with complex numbers and their manipulation. I’ve really already done that, of course, in Chap. 7, and so I think I am on safe ground here with that assumption. The first several sections will lay the theoretical groundwork and then, quite suddenly, you’ll see how they all come together to give us the beautiful and powerful technique of contour integration. None of these preliminary sections is very difficult, but each is absolutely essential for understanding. Don’t skip them!
In keeping with the spirit of this book, the presentation leans heavily on intuitive plausibility arguments and, while I don’t think I do anything wildly outrageous, there will admittedly be occasions where professional mathematicians might feel tiny stabs of pain. (Mathematicians are a pretty hardy bunch, though, and they will survive!) This may be the appropriate place to quote the mathematician John Stalker (of Trinity College, Dublin), who once wrote “In mathematics, as in life, virtue is not always rewarded, nor vice always punished [my emphasis].”Footnote 2 As always, I’ll feel vindicated when, after doing a series of manipulations, MATLAB’s numerical calculations agree with whatever theoretical result we’ve just derived.
8.2 Line Integrals
Imagine two points, A and B, in the two-dimensional x,y plane. Further, imagine that A and B are the two end-points of the curve C in the plane, as shown in Fig. 8.2.1. A is the starting end-point and B is the terminating end-point. Now, suppose that we divide C into n parts (or arcs), with the k-th arc having length Δsk (where k runs from 1 to n). Each of these arcs has a projection on the x-axis, where we’ll write Δxk as the x-axis projection of Δsk. In the same way, we’ll write Δyk as the y-axis projection of Δsk. Again, see Fig. 8.2.1. Finally, we’ll assume, as n → ∞, that Δsk → 0, that Δxk → 0, and that Δyk → 0, for each and every k (that is, the points along C that divide C into n arcs are distributed, loosely speaking, ‘uniformly’ along C).
Continuing, suppose that we have some function h(x, y) that is defined at every point along C. If we form the two sums \( {\sum \limits}_{\mathrm{k}=1}^{\mathrm{n}}\mathrm{h}\left({\mathrm{x}}_{\mathrm{k}},{\mathrm{y}}_{\mathrm{k}}\right)\Delta {\mathrm{x}}_{\mathrm{k}} \) and \( {\sum \limits}_{\mathrm{k}=1}^{\mathrm{n}}\mathrm{h}\left({\mathrm{x}}_{\mathrm{k}},{\mathrm{y}}_{\mathrm{k}}\right)\Delta {\mathrm{y}}_{\mathrm{k}} \) where (xk, yk) is an arbitrary point in the arc Δsk, then we’ll write the limiting values of these sums asFootnote 3
and
The C’s at the bottom of the integral signs in (8.2.1) and (8.2.2) are there to indicate that we are integrating from A to B along C. We’ll call the two limits in (8.2.1) and (8.2.2) line integrals (sometimes the term path integral is used, commonly by physicists). If A = B (that is, if C is a closed loopFootnote 4) then the result is called a contour integral. When we encounter contour integrals it is understood that C never crosses itself (such a C is said to be simple). Further, it is understood that a contour integral is done in the counter-clockwise sense; to reverse the direction of integration will reverse the algebraic sign of the integral.
The value of a line integral depends, in general, on the coordinates of A and B, the function h(x, y), and on the specific path C that connects A and B. For example, suppose that A = (0, 0), B = (1, 1), and that h(x, y) = xy. To start, let’s suppose that C = C1 is the broken path shown in Fig. 8.2.2. The first part is along the x-axis from x = 0 to x = 1, and then the second part is straight-up from x = 1 (y = 0) to x = 1 (y = 1). So, for this path we have y = 0 along the x-axis (and so h(x, y) = 0), and x = 1 on the vertical portion of C1 (and so h(x, y) = y). Thus, our two line integrals on this path are
and
Along the path C2, on the other hand, we have y = x from A to B, and so h(x, y) = x2 (or, equivalently, y2). So, on this path the line integrals are
and
Clearly, the values of the Ix, Iy line integrals are path-dependent and, for a given path, the Ix, Iy line integrals may or may not be equal. We can combine the Ix and Iy line integrals to write the line integral along C as IC = Ix + iIy, and so \( {\mathrm{I}}_{{\mathrm{C}}_1}=i\frac{1}{2} \) while \( {\mathrm{I}}_{{\mathrm{C}}_2}=\frac{1}{3}+i\frac{1}{3} \).
Looking back at the previous section, notice that in Fig. 8.2.2 we could write the unbroken line segment AB as z = x + iy or, since y = x, z = x + ix = x(1 + i) and so dz = (1 + i)dx. Then, as h(x, y) = h(x, x) = x2, we have
which is just as we calculated before.
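These two path computations are easy to replicate numerically. Here is a short Python sketch (my stand-in for the book's MATLAB spot-checks), approximating each line integral by a Riemann sum over a fine partition of the path:

```python
# Numerical check of the path-dependence claim: integrate h(x, y) = xy
# along the two paths C1 and C2 from A = (0, 0) to B = (1, 1), and compare
# with the exact values I_C1 = i/2 and I_C2 = 1/3 + i/3.
import numpy as np

def line_integral(h, path, n=20_000):
    """Approximate the complex line integral of h(x, y) dz along a
    parametrized path t -> z(t), t in [0, 1], by a midpoint Riemann sum."""
    t = np.linspace(0.0, 1.0, n + 1)
    z = path(t)
    zmid = 0.5 * (z[:-1] + z[1:])      # midpoint of each small chord
    dz = np.diff(z)                    # complex increments dz = dx + i dy
    return np.sum(h(zmid.real, zmid.imag) * dz)

h = lambda x, y: x * y

# C1: along the x-axis to (1, 0), then straight up to (1, 1)
C1 = lambda t: np.where(t < 0.5, 2 * t + 0j, 1 + 1j * (2 * t - 1))
# C2: the straight line y = x
C2 = lambda t: t * (1 + 1j)

I1 = line_integral(h, C1)
I2 = line_integral(h, C2)
print(I1)   # ~ 0 + 0.5j
print(I2)   # ~ 1/3 + (1/3)j
```

The two answers differ, just as the text says they must: the value of a line integral depends, in general, on the path and not only on its end-points.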
For now, we’ll put aside these considerations and turn to expanding this book’s discussion from functions of a real variable to functions of a complex variable. Soon, however, you’ll see how this expanded view of functions will ‘circle back’—how appropriate!—to closed contour line integrals, and what we’ve done in this section will prove to be most useful.
8.3 Functions of a Complex Variable
I will write the complex variable z as
where x and y are each real with each varying over the doubly-infinite interval −∞ to +∞, and \( i=\sqrt{-1} \). Geometrically, we’ll interpret z as a point in an infinite, two-dimensional plane (called the complex plane) with x measured along a horizontal axis and y measured along a vertical axis. And we’ll write a complex function of the complex variable z as
where u and v are each real-valued functions of the two real-valued variables x and y. For example, if
then, in this case, u = x2 − y2 and v = 2xy. In x, y notation, we are said to be working in rectangular (or Cartesian) coordinates.
It is often convenient to work in polar coordinates, which means we write the complex variable z as
where r and θ are each real: r is the radial distance from the origin of the coordinate system of the complex plane to the point z (and so 0 ≤ r < ∞), and θ is the angle of the radius vector (of length r) measured counter-clockwise from the positive horizontal x-axis to the radius vector (and so we generally take 0 ≤ θ < 2π, although −π ≤ θ < π is also commonly assumed). You’ll recall that we did this in deriving (7.5.6) and (7.5.7). Note, carefully, that θ is not uniquely determined, as we can add (or subtract) any multiple of 2π from θ and still be talking about the same physical point in the complex plane.
From Euler’s identity we have from (8.3.3) that
For example, if f(z) = z2 then
or, expanding both sides of the last equality,
Since the real and imaginary parts of the expressions in the last equality must be separately equal, we conclude that cos(2θ) = cos2(θ) − sin2(θ) as well as sin(2θ) = 2 cos (θ) sin (θ). These two formulas are, of course, the well-known double-angle formulas from trigonometry, and so already we have a nice illustration of the powerful ability of complex functions to do useful work for us.Footnote 5
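Since the book's habit is to check derivations with MATLAB, here is a quick numeric spot-check (my Python stand-in) that the real and imaginary parts of (eiθ)2 = ei2θ really do reproduce the double-angle formulas:

```python
# Verify cos(2θ) = cos²θ − sin²θ and sin(2θ) = 2 cosθ sinθ by comparing
# against the real and imaginary parts of (e^{iθ})² = e^{i2θ}.
import numpy as np

theta = np.linspace(0, 2 * np.pi, 1000)
z = np.exp(1j * theta)               # points on the unit circle, r = 1
err_cos = np.max(np.abs((z**2).real - (np.cos(theta)**2 - np.sin(theta)**2)))
err_sin = np.max(np.abs((z**2).imag - 2 * np.cos(theta) * np.sin(theta)))
print(err_cos, err_sin)              # both at round-off level
```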
I’ll end this section with two more spectacular demonstrations of that power. First, the calculation of
an integral I am absolutely sure you have never seen done by the ‘routine’ integration techniques of freshman calculus. We’ll do it here (using the polar form of z) with a contour integration in the complex plane. With z = eiθ, which puts z on the unit circle (r = 1) centered on the origin, we can write
because \( \frac{1}{\mathrm{z}}={\mathrm{e}}^{-i\uptheta} \) and Euler’s identity says this is cos(θ) − i sin (θ). Now, consider the complex function
which we’ll integrate counter-clockwise once around the unit circle. That is, we’ll compute
where C is the circle z = eiθ. (The circle with the CCW arrowhead on the integral sign is there simply to emphasize that we are working with a closed line integral.)
The reason for the z in the denominator of the integrand is that dz = ieiθdθ, and we need an eiθ in the denominator to cancel the eiθ in dz. So,
That is, the contour integral at the left in (8.3.5) is the integral we wish to calculate (multiplied by i). To directly calculate the contour integral, we start by expanding the exponential in the left-most integral in a power series. That is,
Using the binomial theorem to write
we have
Now, concentrate on that last integral, where we’ll replace z with eiθ and dz with ieiθdθ:
This is remarkable! Every one of these integrals on the right vanishes as n and k run through their values except for those cases where \( \mathrm{k}=\frac{\mathrm{n}}{2} \). This has a profound implication, as then k can be an integer (which of course it is) only if n is even. For all odd values of n the integrals vanish, and in the cases of n even they vanish, too, if \( \mathrm{k}\ne \frac{\mathrm{n}}{2} \). We can include all the integrals that don’t vanish with the simple trick of writing n = 2m, where m = 0, 1, 2, 3, …, and so we have
From (8.3.5) we can now write
or, cancelling the i’s, we have our answer:
The terms in the series on the right decrease very rapidly and so the series quickly converges. Using just the first four terms the sum is \( 2\uppi \left(1+\frac{1}{4}+\frac{1}{64}+\frac{1}{2,304}\right)=7.95488 \) and MATLAB agrees, as integral(@(x)exp(cos(x)),0,2∗pi) = 7.95492… .
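For readers following along in Python rather than MATLAB, here is a sketch comparing direct quadrature of the integral against the series just derived (scipy is my choice here, not the book's):

```python
# Compare numerical quadrature of ∫_0^{2π} e^{cos θ} dθ against the series
# 2π Σ_{m≥0} 1/((m!)² 4^m) derived above, summed to many more terms than
# the four used in the text.
from math import cos, exp, factorial, pi
from scipy.integrate import quad

numeric, _ = quad(lambda x: exp(cos(x)), 0, 2 * pi)
series = 2 * pi * sum(1 / (factorial(m)**2 * 4**m) for m in range(20))
print(numeric, series)   # both ≈ 7.954927
```

With twenty terms the series has long since converged to machine precision, confirming how rapidly its terms decrease.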
For the final demonstration in this section (this one from physics) of the amazing utility of complex functions, imagine a point mass m moving in a plane along the path given by (8.3.3),
where now z, r and θ are specifically indicated to be functions of time (t). (The meaning of each of these variables is as given at the beginning of this section.) The motion of m is due entirely to a force acting along the line connecting the mass to the source of the force: the classic example of this situation is the Earth (the ‘point’ mass) moving under the influence of the gravitational field of the Sun (which we’ll take as being at the origin of the x-y coordinate system). The attractive force on the Earth is, of course, always directed radially inward towards the Sun.
If we write the magnitude of the force on m as f(r, θ), Newton’s famous second law of motion (‘force is mass times acceleration’) says
From (8.3.7) we have
and so
or,
Using (8.3.9) in (8.3.8) and cancelling all the eiθ (which are never zero), we arrive at
Equating real and imaginary parts of this last expression gives us the famous differential equations of motion in a radial force field:
and
Interestingly, the result in (8.3.11) was implicitly known long before Newton. Mathematics alone tells us that
and, since the expression in the square brackets is zero by (8.3.11), we have
Thus, integration gives us
where C is a constant. This result has a historically important physical interpretation in the theory of planetary motion.
Look at Fig. 8.3.1, which shows a planet’s location at times t and t + Δt, with the Sun at the origin of our coordinate system. We assume Δt is so small that the angular change Δθ in the radius vector’s angle is also very small, and that the length of the radius vector remains essentially unchanged. Then, the area between the two dashed lines is essentially that of an isosceles triangle with area ΔA given by
Dividing through by Δt gives
an expression that becomes exact as Δt → 0. That is, replacing the delta quantities with differential ones, we have
or, from (8.3.12),
This last expression is the mathematical form of the statement (given in 1609) by the German astronomer Johannes Kepler (1571–1630) of his famous area law: the line joining the Sun to a planet sweeps over equal areas in equal time intervals. Kepler deduced this (the second of three general laws he discovered) not by physics or complex function theory, but rather from years of tedious observational data gathered with the naked eye.
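A numerical illustration (my own Python sketch, not from the book): integrate the radial equations of motion (8.3.10) and (8.3.11) for an inverse-square force f/m = μ/r2 and watch r2(dθ/dt) hold constant, just as (8.3.12) promises. The value of μ and the initial conditions are arbitrary choices for the demonstration.

```python
# Integrate r'' - r θ'^2 = -μ/r² (8.3.10) and r θ'' + 2 r' θ' = 0 (8.3.11),
# then check that L = r² dθ/dt (twice the areal velocity) stays constant.
import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0   # arbitrary strength for the inverse-square force

def rhs(t, s):
    r, rdot, theta, thetadot = s
    return [rdot,
            r * thetadot**2 - mu / r**2,   # from (8.3.10)
            thetadot,
            -2 * rdot * thetadot / r]      # from (8.3.11)

# start at r = 1 with purely tangential velocity (a mildly eccentric orbit)
sol = solve_ivp(rhs, (0, 50), [1.0, 0.0, 0.0, 1.1],
                rtol=1e-10, atol=1e-12, dense_output=True)
t = np.linspace(0, 50, 500)
r, _, _, thetadot = sol.sol(t)
L = r**2 * thetadot
print(L.min(), L.max())   # nearly identical: equal areas in equal times
```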
8.4 The Cauchy-Riemann Equations and Analytic Functions
Complex function theory really starts with the study of what it means to talk of the derivative of f(z). In real function theory, the derivative of g(x) at x = x0 is defined as
We do almost the same thing with a complex function. Indeed, the formal definition for the derivative of a complex f(z) at z = z0 is
The vanishing of Δz = Δx + iΔy is, however, not quite as straightforward as it is in the case of a real variable. In that simpler case, where we let Δx → 0 to calculate g'(x0), Δx only has to vanish along the one-dimensional real axis. That is, Δx can shrink to zero in just two ways: either from the left of x0 or from the right of x0. But in the complex case we must take into account that, since z0 is a point in the complex, two-dimensional plane, Δz can shrink to zero in an infinity of different ways (from the left of z0, from the right of z0, from below z0, from above z0 or, indeed, from any direction of the compass). So, just how does Δz → 0?
Mathematicians consider the most condition-free definition possible for the derivative to be the best definition, and so their answer to our question is: we want f'(z0) to be the same independent of how Δz → 0. To have this be the case, as you might suspect, comes with a price. If f = u + iv then the price for a derivative at z = z0 that doesn’t depend on the precise nature of how Δz → 0 is that u and v cannot be just any functions of x and y, but rather must satisfy certain conditions. If these conditions are satisfied at z = z0 and at all points in a region (domain or neighborhood are terms that are also used) surrounding z0, then we say that f(z) is an analytic function in that region (not to be confused with the analytic signal from radio theory that we encountered in the previous chapter).
The conditions for f(z) to be analytic are called the Cauchy-Riemann (C-R) equations,Footnote 6 which are actually pretty easy to state: at z = z0 it must be true that
and
For example, suppose that f(z) = z. That is, f(x, y) = x + iy = u(x, y) + i v(x, y) which means that u(x, y) = x and that v(x, y) = y. Then,
and we see that the C-R equations are satisfied. Indeed, since the C-R equations are independent of z (of z0) then f(z) = z is analytic over the entire finite complex plane.Footnote 7 As a counter-example, of an f(z) that is nowhere analytic, consider \( \mathrm{f}\left(\mathrm{z}\right)=\overline{\mathrm{z}}=\mathrm{x}-i\mathrm{y} \), where \( \overline{\mathrm{z}} \) is the conjugate of z. Then,
and so (8.4.1) is never satisfied.
Under not particularly harsh requirements the C-R equations are necessary and sufficient conditions for f(z) to be analytic, and I’ll refer you to any good text devoted to complex function theory for a proof of this.Footnote 8 To show that the C-R equations are necessary is not at all difficult, however. Since z = x + iy then to have z → 0 requires that both x → 0 and y → 0. That is, to speak of the derivative of f(z) at z = z0 means to calculate
Now, out of the infinity of ways that both \( \Delta \mathrm{x} \) and \( \Delta \mathrm{y} \) can vanish, let’s consider just two. First, assume that \( \Delta \mathrm{y}=0 \) and so \( \Delta \mathrm{z}=\Delta \mathrm{x} \). That is, z approaches z0 parallel to the x-axis. Second, assume that \( \Delta \mathrm{x}=0 \) and so \( \Delta \mathrm{z}=\mathrm{i}\Delta \mathrm{y} \). That is, z approaches z0 parallel to the y-axis. If f′(z0) is to be unique, independent of the details of how \( \Delta \mathrm{z}\to 0 \), then these two particular cases must give the same result. In the first case we have
And in the second case we have
Equating the real and the imaginary parts of these two expressions for f′(z0) gives the C-R equations in (8.4.1) and (8.4.2).
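Here is a small Python experiment (my sketch) that makes the 'infinity of directions' point concrete: finite-difference quotients for the analytic f(z) = z2 agree no matter which direction Δz → 0 comes from, while those for the nowhere-analytic f(z) = z̄ depend on the direction.

```python
# Compute the difference quotient (f(z0 + Δz) - f(z0))/Δz for Δz approaching
# zero from 8 compass directions, and measure the spread across directions.
import numpy as np

z0 = 0.7 + 0.3j            # an arbitrary test point
h = 1e-6                   # small step |Δz|
directions = np.exp(1j * np.linspace(0, 2 * np.pi, 8, endpoint=False))

spreads = {}
for name, f in [("z^2", lambda z: z**2), ("conj(z)", np.conj)]:
    q = [(f(z0 + h * d) - f(z0)) / (h * d) for d in directions]
    spreads[name] = max(abs(v - q[0]) for v in q)

print(spreads)   # tiny spread for z^2; order-2 spread for conj(z)
```

For z̄ the quotient works out to e−2iφ for approach angle φ, so it literally traces the unit circle as the direction varies; no single derivative value exists.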
Analytic functions are clearly a rather special subset of all possible complex functions, but certain broad classes are included. They are:
1. Every polynomial of z is analytic;
2. Every sum and product of two analytic functions is analytic;
3. Every quotient of two analytic functions is analytic except at those values where the denominator function is zero;
4. An analytic function of an analytic function is analytic.
So, from (1) f(z) = z2 is analytic because it is a polynomial, and f(z) = ez is analytic because the exponential can be expanded in a power series. From (2) f(z) = z2ez is analytic, and from (3) f(z) = ez/(z2 + 1) is analytic except at z = ± i, which are called the singularities of f(z) because, at those values of z, f(z) blows-up.Footnote 9 And finally, from (4) \( \mathrm{f}\left(\mathrm{z}\right)={\mathrm{e}}^{{\mathrm{e}}^{\mathrm{z}}} \) is analytic.
8.5 Green’s Integral Theorem
In this section we’ll continue our earlier discussion of line integrals to derive what is called Green’s theorem.Footnote 10 We begin by imagining a closed path (contour) C that encloses a region R of the complex plane, as shown in Fig. 8.5.1. We further imagine that there are two real functions of the real variables x and y, P(x, y) and Q(x, y), defined at every point along C and in the region R (the interior of C). Then, Green’s theorem says that
The circle on the line integral in the left-hand side of (8.5.1) is there to emphasize that C is a closed, non-self-intersecting path (a simple curve traversed in the CCW sense, as mentioned in Sect. 8.2). R is called a simply connected region, which means every closed curve in R encloses only points in R. If a region is not simply connected then it is said to be multiply-connected: an example is a simply connected region that has a hole cut in it. The points ‘in the hole’ are considered to be in the exterior of C.
Green’s theorem relates a contour integral along C to an area integral over the interior of C. For the contour of Fig. 8.5.1 it’s pretty obvious where the interior of C is, but in just a bit we’ll encounter contours whose interiors won’t be so obvious. Here’s an easy, low-level way to always locate the interior of a C: as you walk along C in the CCW sense, imagine you drag both hands along the ground. Your left hand will be in the interior, while your right hand will be in the exterior of C. (The idea that a simple, closed curve divides the plane into two regions—its interior and its exterior—seems pretty obvious. Obvious or not, mathematicians have felt the need to enshrine it as the Jordan Curve Theorem, after the French mathematician Camille Jordan (1838–1922), who stated it in 1887.)
To prove Green’s theorem isn’t difficult, or at least it isn’t if we make some highly simplifying assumptions. These assumptions are actually not required, but to remove them complicates the proof. To start, our first assumption is that R is a rectangular patch oriented parallel to the x and y axes, as shown in Fig. 8.5.2. (I’ve drawn the patch totally in the first quadrant, but that’s just the way I drew it—in all that follows that is irrelevant.) The boundary edge of R is C = C1 + C2 + C3 + C4, which simply means that C is made of four sides. When we are done with this special R, I’ll make some admittedly hand-waving (but plausible, too, I hope) arguments to try to convince you that far more complicated shapes for R are okay, too.
Starting with the \( {\iint}_{\mathrm{R}}-\frac{\mathrm{\partial P}}{\mathrm{\partial y}}\mathrm{dx}\ \mathrm{dy} \) term on the right-hand side of Green’s theorem, we have
Notice, carefully, that in the last two integrals I have dropped the subscripts on y0 and y1, subscripts that were included in the earlier integrals. I can do that because the subscripts were originally there to distinguish between integrating along the lower edge (y0) or along the upper edge (y1) of R, and that job is now done in the last two integrals by writing C1 (the lower edge) and C3 (the upper edge) beneath the appropriate integral sign. Notice, too, that writing \( {\int}_{{\mathrm{y}}_0}^{{\mathrm{y}}_1}\frac{\mathrm{\partial P}}{\mathrm{\partial y}}\mathrm{dy}=\mathrm{P}\left(\mathrm{x},{\mathrm{y}}_1\right)-\mathrm{P}\left(\mathrm{x},{\mathrm{y}}_0\right) \) makes the assumption that there is no discontinuity in \( \frac{\mathrm{\partial P}}{\mathrm{\partial y}} \), that is, the partial derivative is continuous.
Similar integrals with respect to x can be written for the other two edges (C2 and C4) as well and, since those are vertical edges, we know that everywhere along them dx = 0. That is,
and
Since those integrals vanish we can formally add them to our C1 and C3 integrals without changing anything. So,
If you repeat all the above for the \( {\iint}_{\mathrm{R}}\frac{\mathrm{\partial Q}}{\mathrm{\partial x}\ }\ \mathrm{dx}\ \mathrm{dy} \) term in Green’s theorem, and observe dy = 0 along the horizontal edges C1 and C3, you should easily see that
and that completes the proof of Green’s theorem for our nicely oriented rectangle in Fig. 8.5.2. In fact, however, this proof extends rather easily to other much more complicated shapes for R.
In Fig. 8.5.3, for example, you see how a semicircular disk can be constructed from many very thin rectangles—the thinner they each are the more of them there are, yes, but that’s okay; make each of them as thin as the finest onion-skin paper, if you like—the thinner they are the better they approximate the half-disk. If the boundary edge of the half-disk is denoted by C, and if the complete (all four edges) boundaries of the individual rectangles are denoted by C1, C2, C3, …, then
because those edges of the individual rectangular boundaries that are parallel to the x-axis are traversed twice, once in each sense (CW and CCW), and so their contributions to the various line integrals on the right-hand-side of the above equation cancel. The only exception to this cancellation is the very bottom horizontal edge of the half-disk.
In addition, the integrations along the individual vertical edges of the thin rectangles avoid cancellation and, if the rectangles are very thin then the union of the vertical edges is the circular portion of the half-disk boundary. So, after integrating around all the rectangles, we are left with nothing more than integrating along the bottom of the half-disk and the circular portion. You can see that, using this same basic idea, we can build very complicated shapes out of appropriately arranged rectangles and, since Green’s theorem works for each rectangle, then it works for all of them together and so Green’s theorem works for their composite (and perhaps quite complicated) region R.
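Green's theorem is also easy to sanity-check numerically. Here is a Python sketch (my own, with the arbitrary choices P = −y and Q = x) on the unit disk, where both sides of (8.5.1) should equal twice the disk's area, that is, 2π:

```python
# Contour side of Green's theorem for P = -y, Q = x around the unit circle:
# ∮(P dx + Q dy) should equal ∬(∂Q/∂x - ∂P/∂y) dx dy = 2 × (area) = 2π.
import numpy as np

n = 10_000
t = np.linspace(0, 2 * np.pi, n + 1)
x, y = np.cos(t), np.sin(t)                       # CCW around the unit circle
xm = 0.5 * (x[:-1] + x[1:])                       # chord midpoints
ym = 0.5 * (y[:-1] + y[1:])
contour = np.sum(-ym * np.diff(x) + xm * np.diff(y))
print(contour, 2 * np.pi)   # ≈ 6.283185 both
```

Reversing the direction of traversal (CW instead of CCW) flips the sign, in line with the convention stated in Sect. 8.2.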
8.6 Cauchy’s First Integral Theorem
By convention, the theorem in this section is named after Cauchy, who published it in 1814, but in a letter dated December 1811 to his fellow German mathematician Friedrich Wilhelm Bessel (1784–1846), the great Gauss stated (without proof) the theorem we will prove here. In mathematics, alas for Gauss (as if he really needed more to add to his enormous résumé), credit goes to the first to publish.
Well, it’s taken a bit to get to this point, but it will soon be clear it was worth the effort. Our basic result is easy to state: if f(z) is analytic everywhere on and inside C then
To show this, recall (8.3.1) and (8.3.2). That is, with f(z) = u(x, y) + i v(x, y) and writing dz = dx + i dy, we have
or,
Now, because of Green’s theorem, the two contour integrals on the right are each equal to zero. To see this, consider the first integral on the right-hand side of (8.6.2), and look back at (8.5.1). You see that we have P(x, y) = u(x, y) and Q(x, y) = −v(x, y), and so the partial derivatives on the right-hand side of (8.5.1) are
The C-R equation of (8.4.2), which holds here because we are assuming f(z) is analytic, says the integrand of the double integral in Green’s theorem is
For the second integral on the right-hand side of (8.6.2) we have P(x, y) = v(x, y) and Q(x, y) = u(x, y). So now
and the C-R equation of (8.4.1) says the integrand of the double integral in Green’s theorem is
because, again, f(z) is analytic. So, (8.6.1) is proven.
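A numerical look at what (8.6.1) does and does not say (a Python sketch of mine): around the unit circle the analytic f(z) = ez integrates to zero, while f(z) = 1/z, whose singularity at z = 0 sits inside C, gives 2πi instead.

```python
# Contour integrals around the unit circle C, approximated by summing
# f(z_mid) dz over small chords.
import numpy as np

t = np.linspace(0, 2 * np.pi, 20_001)
z = np.exp(1j * t)                            # the unit circle C
zm = np.exp(1j * 0.5 * (t[:-1] + t[1:]))      # midpoints along C
dz = np.diff(z)
I_analytic = np.sum(np.exp(zm) * dz)          # f(z) = e^z, analytic inside C
I_singular = np.sum(dz / zm)                  # f(z) = 1/z, singular at z = 0
print(abs(I_analytic))                        # ≈ 0, per (8.6.1)
print(I_singular)                             # ≈ 2πi, since C encloses z = 0
```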
There is no denying that (8.6.1) looks pretty benign. But it has tremendous power. For example, consider the case of
which is analytic everywhere except at z = 0 because there f(z) blows-up. So, if we integrate f(z) around any C that avoids putting z = 0 in its interior, we know from (8.6.1) that we’ll get zero for the integral. With that in mind, consider the contour C shown in Fig. 8.6.1, where ε > 0 and T is finite, and the two arcs are circular. In the notation of Fig. 8.6.1, we have
For each of the four segments of C, we can write:
on C1: z = x, dz = dx;
on C2: z = Teiθ, dz = iTeiθdθ, \( 0<\uptheta <\frac{\uppi}{2} \);
on C3: z = i y, dz = i dy;
on C4: z = εeiθ, dz = iεeiθdθ, \( \frac{\uppi}{2}>\uptheta >0 \);
Thus, (8.6.3a) becomes
Then, doing all the obvious cancellations and reversing the direction of integration on the third and fourth integrals (and, of course, their algebraic signs, too), we arrive at
If, in the last integral, we change the dummy variable of integration from y to x, we then have
Now, focus on the second integral and expand its integrand with Euler’s identity:
If we let T → ∞ and ε → 0 then the first term on the right goes to zero because \( {\lim}_{\mathrm{T}\to \infty }{\mathrm{e}}^{-\mathrm{Tsin}\left(\uptheta \right)}=0 \) for all \( 0<\uptheta <\frac{\uppi}{2} \), while the second term goes to 1 because
Thus, (8.6.3) becomes, as T → ∞ and ε → 0,
or, using Euler’s identity again,
or,
Equating imaginary parts, we have
which we’ve already derived in (3.2.1)—and it’s certainly nice to see that our contour integration agrees—while equating real parts gives
MATLAB agrees, too, as integral(@(x)(cos(x)-exp(−x))./x,0,1e4) = −3 × 10−5 which, while not exactly zero, is pretty small. You’ll recall this integral as a special case of (4.3.14), derived using ‘normal’ techniques.
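Both results can also be cross-checked in Python. In this sketch (my choice of tooling, not the book's) scipy's quad handles the oscillatory tails via its Fourier-weighted mode, which copes with the slowly decaying integrands better than brute-force truncation:

```python
# Check ∫_0^∞ sin(x)/x dx = π/2 and ∫_0^∞ (cos x - e^{-x})/x dx = 0.
import numpy as np
from scipy.integrate import quad

# sin(x)/x: integrate [0, π] directly, then a Fourier-weighted tail.
head_s = quad(lambda x: np.sinc(x / np.pi), 0, np.pi)[0]   # sinc handles x = 0
tail_s = quad(lambda x: 1.0 / x, np.pi, np.inf, weight='sin', wvar=1)[0]
print(head_s + tail_s, np.pi / 2)    # both ≈ 1.5707963

# (cos x - e^{-x})/x: same idea, splitting the tail into its two pieces.
head_c = quad(lambda x: (np.cos(x) - np.exp(-x)) / x, 1e-12, np.pi)[0]
tail_c = (quad(lambda x: 1.0 / x, np.pi, np.inf, weight='cos', wvar=1)[0]
          - quad(lambda x: np.exp(-x) / x, np.pi, np.inf)[0])
print(head_c + tail_c)               # ≈ 0
```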
You can see that the ability of contour integration in the complex plane to do improper real integrals, integrals like \( {\int}_0^{\infty } \)and \( {\int}_{-\infty}^{\infty } \), depends on the proper choice of the contour C. At the start, C encloses a finite region of the plane, with part of C lying on the real axis. Then, as we let C expand so that the real axis portion expands to −∞ to ∞, or to 0 to ∞, the other portions of C result in integrations that are, in some sense, ‘easy to do.’
The calculation of (8.6.4) was a pretty impressive example of this process, but here’s another application of Cauchy’s first integral theorem that, I think, tops it. Suppose a, b, and c are any real numbers (a ≠ 0) such that b2 > 4ac. What is the value of the integral
I think you’ll be surprised by the answer. Here’s how to calculate it, starting with the contour integral
where we notice that the integrand has two singularities that are both on the real axis, as shown in Fig. 8.6.2. That’s because of the given b2 > 4ac condition, which says the denominator vanishes at the two real values (remember the quadratic formula!) of
The equality b2 = 4ac is the case where the two real roots have merged to form a double root which, as you’ll soon see, does add a twist to our analysis.
In Fig. 8.6.2 I’ve shown the singularities as being on the negative real axis, but they could both be on the positive real axis—just where they are depends on the signs of a, b, and c. All that actually matters, however, is that the singularities are both on the real axis. This means, when we select C, that we must arrange for its real axis portion to avoid those singularities; you’ll remember the big deal I made on that very point in Chap. 1, with the discussion there of the ‘sneaking up on a singularity’ trick. With contour integration we don’t so much ‘sneak up’ on a singularity as ‘swing around and avoid’ it, which we do with the C2 and C4 portions of C shown in Fig. 8.6.2 (and take a look back at Fig. 8.6.1, too, with its C4 avoiding a singularity at z = 0). Those circular swings (called indents) are such as to keep the singularities in the exterior of C. Each indent has radius ε, which we’ll eventually shrink to zero by taking the limit ε → 0.
So, here’s what we have, with C = C1 + C2 + C3 + C4 + C5 + C6.
on C1, C3, C5: z = x, dz = dx;
on C2: z = x2 + εeiθ, dz = iεeiθdθ, π > θ > 0;
on C4: z = x1 + εeiθ, dz = iεeiθdθ, π > θ > 0;
on C6: z = Teiθ, dz = iTeiθdθ, 0 < θ < π.
Cauchy’s first integral theorem says
When we eventually let ε → 0 and T → ∞, the first three line integrals will combine to give us the real integral we are after. The value of that integral will therefore be given by
So, let’s now calculate each of these three line integrals.
For C2,
Since \( \left({\mathrm{ax}}_2^2+{\mathrm{bx}}_2+\mathrm{c}\right)=0 \) because x2 is a zero of the denominator (by definition), and as ε² → 0 faster than ε, then for very small ε we have
In the same way,
And finally,
and, since the integrand vanishes like \( \frac{1}{\mathrm{T}} \) as T → ∞, then
Thus,
Since
and
we see that the two singularities cancel each other and so we have the interesting result
for all possible values of a, b, and c. This, I think, is not at all obvious! (In Challenge Problem 8.5 you are asked to do an integral that generalizes this result.)
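To back up this surprising zero numerically, here is a Python sketch of the principal-value computation the indents are performing, using the arbitrary test case a = 1, b = 0, c = −1 (so b2 > 4ac, with roots at x = ±1):

```python
# Principal value of ∫ dx/(ax² + bx + c) over the real line, computed by
# cutting symmetric gaps of radius eps around each real root -- the numeric
# analogue of shrinking the semi-circular indents.
import numpy as np
from scipy.integrate import quad

a, b, c = 1.0, 0.0, -1.0                       # b² > 4ac; roots at ±1
f = lambda x: 1.0 / (a * x**2 + b * x + c)
disc = np.sqrt(b * b - 4 * a * c)
x1, x2 = (-b - disc) / (2 * a), (-b + disc) / (2 * a)

eps = 1e-5                                     # the indent radius
pv = (quad(f, -np.inf, x1 - eps, limit=500)[0]
      + quad(f, x1 + eps, x2 - eps, limit=500)[0]
      + quad(f, x2 + eps, np.inf, limit=500)[0])
print(pv)   # ≈ 0, to within O(eps)
```

The divergent contributions on the two sides of each root cancel in pairs, which is exactly the cancellation of the two iπ/… indent terms in the analysis above.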
An immediate question that surely comes to mind now is, what happens if b2 ≤ 4ac? Consider the two cases b2 = 4ac and b2 < 4ac, separately. Suppose that b2 = 4ac. Then, the denominator of (8.6.5) is \( {\mathrm{a}\mathrm{x}}^2+\mathrm{bx}+\frac{{\mathrm{b}}^2}{4\mathrm{a}}=\mathrm{a}\left({\mathrm{x}}^2+\frac{\mathrm{b}}{\mathrm{a}}\mathrm{x}+\frac{{\mathrm{b}}^2}{4{\mathrm{a}}^2}\right)=\mathrm{a}{\left(\mathrm{x}+\frac{\mathrm{b}}{2\mathrm{a}}\right)}^2 \) and it is immediately obvious that \( {\int}_{-\infty}^{\infty}\frac{\mathrm{dx}}{\mathrm{a}{\left(\mathrm{x}+\frac{\mathrm{b}}{2\mathrm{a}}\right)}^2}\ne 0 \) (indeed, the integral blows-up!)Footnote 11
What if b2 < 4ac? If that’s the case the singularities of f(z) are no longer on the real axis but, instead, have non-zero imaginary parts. We’ll come back to this question in the next section, where we’ll find that the integral in (8.6.5) is, again, no longer zero under this new condition.
Contour indents around singularities are such a useful device that their application warrants another example. So, what I’ll do next is use indents to derive a result that would be extremely difficult to get by other means: calculating the value of
To evaluate this integral, we’ll study the contour integral
using the curious contour C shown in Fig. 8.6.3.
The reasons for choosing this particular C (looking a bit like a block of cheese that mice have been nibbling on) probably require some explanation. The real-axis portions (C1 and C3) are perhaps obvious, as eventually we’ll let T → ∞, and these parts of C (where z = x) will give us the integral we are after. That is, the sum of the C1 and C3 integrals is
The semi-circular indent (C2) with radius ε (we’ll eventually let ε → 0) around the origin is also probably obvious because z = 0 is a singularity of the integrand, and so you can see I’m trying to set things up to use Cauchy’s first integral theorem (which requires that C enclose no singularities). It’s the other portions of C, the two vertical sides (C4 and C8), and the two sides parallel to the real axis (C5 and C7), that are probably the ones puzzling you right now.
Since I am trying to avoid enclosing any singularities, you can understand why I am not using our previous approach of including a semi-circular arc from T on the positive real axis back to –T on the negative real axis, an arc that then expands to infinity as T → ∞. That won’t work here because the integrand has an infinity of singularities on the imaginary axis, spaced up and down at intervals of 2πi (because Euler’s identity tells us that 1 − ez = 0 has the solutions z = 2πik for k any integer). A semi-circular arc would end-up enclosing an infinite number of singularities!
There is another issue, too. The k = 0 singularity is the one we’ve already avoided on the real-axis, but why (you might ask) are we intentionally running right towards the singularity for k = 1 (at 2πi on the imaginary axis)? Isn’t the C in Fig. 8.6.3 just asking for trouble? Sure, we end-up avoiding that singularity with another semi-circular indent, but why not just run the top segment of C below the k = 1 singularity and so completely and automatically miss the singularity that way? Well, trust me— there is a reason, soon to be revealed.
Since we have arranged for there to be no singularities inside C we have, by Cauchy’s first integral theorem,
or, since on C1 and C3 we have z = x,
Soon, of course, we’ll be letting T → ∞ and ε → 0 in these integrals. Let’s now start looking at the ones on the right in more detail, starting with C4.
On C4 we have z = T + iy where 0 ≤ y ≤ 2π. The integrand of the C4 integral is therefore
As T → ∞ we see that the magnitude of the numerator blows-up like eaT (|eiay| = 1), while the magnitude of the denominator blows-up like eT. So, the magnitude of the integrand behaves like e(a−1)T as T → ∞ which means, since 0 < a < 1, that the integrand goes to zero and so we conclude that the C4 integral vanishes as T → ∞. In the same way, on C8 we have z = −T + iy with 2π > y > 0. The integrand of the C8 integral is
and so as T → ∞ we see that the magnitude of the numerator goes to zero as e−aT (because a is positive) while the magnitude of the denominator goes to 1. That is, the integrand behaves like e−aT and so the C8 integral also vanishes as T → ∞.
Next, let’s look at the C5 and C7 integrals. Be alert!—this is where you’ll see why running C right towards the imaginary axis singularity at 2πi is a good idea, even though we are going to avoid it ‘at the very last moment’ (so to speak) with the C6 semi-circular indent. On the C5 and C7 integrals we have z = x + 2πi and so dz = dx (just like on the C1 and C3 integrals). Writing-out the C5 and C7 integrals in detail, we have
or, because e2πi = 1 (this is the crucial observation!) we have the sum of C5 and C7 integrals as
Notice that, to within the constant factor −e2πai, this is the sum of the C1 and C3 integrals. We have this simplifying result only because we ran the top segment of C directly towards the 2πi singularity. All this means we can now write (8.6.7) as (because, don’t forget, the C4 and C8 integrals vanish as T → ∞):
On C2 we have z = εeiθ for π ≥ θ ≥ 0 and so dz = iεeiθdθ. Thus,
Recalling the power series expansion of the exponential and keeping only the first-order terms in ε (because all the higher-order terms go to zero even faster than does ε), we have
In the same way,
and so
On C6 we have z = 2πi + εeiθ for 0 ≥ θ ≥ − π and so dz = iεeiθdθ. Thus,
or, as we let ε → 0,
Plugging these two results for the C2 and C6 integrals into (8.6.8) and letting T → ∞ and ε → 0 we get
or
Be sure to carefully note that the value of the integral in (8.6.9) comes entirely from the vanishingly small semi-circular paths around the two singularities. Singularities, and integration paths of ‘zero’ length around them, matter! If \( \mathrm{a}=\frac{1}{4} \) the integral is equal to π and MATLAB agrees because (using our old trick of ‘sneaking up’ on the singularity at x = 0), integral(@(x)exp(x/4)./(1-exp(x)),-1e3, -.0001) + integral(@(x)exp(x/4)./(1-exp(x)),.0001,1e3) = 3.14154… .
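The a = 1/4 check above can be repeated in Python, independent of MATLAB. One standard trick (my choice, not the book's): fold x with −x, which turns the symmetric principal-value pair of integrals into a single smooth integrand. A little algebra gives f(x) + f(−x) = (eax − e(1−a)x)/(1 − ex), with limiting value 1 − 2a at x = 0. This is a sketch under my own assumptions: the truncation at x = 40 and the helper `simpson` are mine.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a = 0.25  # the text's test case, 0 < a < 1

def folded(x):
    # f(x) + f(-x) for f(x) = e^(ax)/(1 - e^x); the algebra gives
    # (e^(ax) - e^((1-a)x))/(1 - e^x), smooth at x = 0 with limit 1 - 2a.
    if x == 0.0:
        return 1.0 - 2.0 * a
    return (math.exp(a * x) - math.exp((1.0 - a) * x)) / (1.0 - math.exp(x))

# Principal value over the whole line = integral of the fold over (0, inf);
# truncating at x = 40 leaves a tail of order e^(-10).
pv = simpson(folded, 0.0, 40.0, 20000)
```

For a = 1/4 this lands within a few parts in ten thousand of π, in agreement with the MATLAB check.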
Before leaving this section I should tell you that not every use of Cauchy’s first integral theorem is the calculation of the closed-form value of an integral. Another quite different and very nifty application is the transformation of an integral that is difficult to accurately calculate numerically into another equivalent integral that is much easier to calculate numerically. Two examples of this are
and
where a is a positive constant. These two integrals have no closed-form values, and each has to be numerically evaluated for each new value of a.
To do that, accurately, using the usual numerical integration techniques is not easy, for the same reasons I gave in the last chapter when we derived (7.5.2). That is, the integrands of both I(a) and J(a) are really not that small even for ‘large’ T, as the denominators increase slowly and the numerators don’t really decrease at all but simply oscillate endlessly between ±1. To numerically calculate I(a), for example, by writing integral(@(x)cos(x)./(x + a),0,T) with the numerical values of a and T inserted doesn’t work well. For example, if a = 1 then for the four cases of T = 5, 10, 50, and 100 we get
| T   | I(1)     |
|-----|----------|
| 5   | 0.18366… |
| 10  | 0.30130… |
| 50  | 0.33786… |
| 100 | 0.33828… |
The calculated values of I(1) are not stable out to more than a couple of decimal places, even for T = 100. A similar table for J(1) is
| T   | J(1)     |
|-----|----------|
| 5   | 0.59977… |
| 10  | 0.70087… |
| 50  | 0.60264… |
| 100 | 0.61296… |
These values for J(1) are even more unstable than are those for I(1).
What I’ll do now is show you how the first integral theorem can be used to get really excellent numerical accuracy, even with a ‘small’ value of T. What we’ll do is consider the contour integral
where C = C1 + C2 + C3 is the first quadrant circular contour shown in Fig. 8.6.4. The integrand has a lone singularity on the negative real axis at z = −a < 0, which lies outside of C. Thus, we immediately know from the first theorem that, for this C,
Now, for the three distinct sections of C, we have:
on C1: z = x and so dz = dx, 0 ≤ x ≤ T;
on C2: z = Teiθ, dz = iTeiθdθ, \( 0<\uptheta <\frac{\uppi}{2} \);
on C3: z = i y, dz = i dy, T ≥ y ≥ 0.
So, starting at the origin and going around C in the counterclockwise sense, (8.6.10) becomes
or,
Our next step is to look at what happens when we let T → ∞. On C2 we have z = Teiθ and so, using Euler's identity,
\( {\mathrm{e}}^{i\mathrm{z}}={\mathrm{e}}^{i\mathrm{T}\cos \left(\uptheta \right)-\mathrm{T}\sin \left(\uptheta \right)}={\mathrm{e}}^{i\mathrm{T}\cos \left(\uptheta \right)}{\mathrm{e}}^{-\mathrm{T}\sin \left(\uptheta \right)} \)
and so, because \( \left|{\mathrm{e}}^{i\mathrm{T}\cos \left(\uptheta \right)}\right|=1 \) and sin(θ) > 0 for \( 0<\uptheta <\frac{\uppi}{2} \), we have
\( {\lim}_{\mathrm{T}\to \infty}\left|{\mathrm{e}}^{i\mathrm{z}}\right|={\lim}_{\mathrm{T}\to \infty}\left|{\mathrm{e}}^{-\mathrm{Tsin}\left(\uptheta \right)}\right|=0 \).Footnote 12
Thus, as T → ∞ we arrive at
That is,
or, equating real and imaginary parts, and changing the dummy variable of integration from y to x,
and
The new integrals on the right for I(a) and J(a) have integrands that decrease rapidly as x increases from zero.
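The displayed transformed integrals aren't reproduced here, but tracing the contour pieces suggests I(a) = ∫0∞ x e−x/(x2 + a2) dx and J(a) = a∫0∞ e−x/(x2 + a2) dx. Under that assumption (my reconstruction), a quick Python sketch reproduces the stable values in the tables below; the truncation at x = 50 and the helper `simpson` are mine.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a = 1.0
# Both integrands decay like e^(-x), so truncating at x = 50 is more than enough.
I = simpson(lambda x: x * math.exp(-x) / (x * x + a * a), 0.0, 50.0, 20000)
J = simpson(lambda x: a * math.exp(-x) / (x * x + a * a), 0.0, 50.0, 20000)
```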
Calculating I(1) and J(1) again, using these alternative integrals, we have the following new tables:
| T   | I(1)      |
|-----|-----------|
| 5   | 0.342260… |
| 10  | 0.343373… |
| 50  | 0.343378… |
| 100 | 0.343378… |
| T   | J(1)      |
|-----|-----------|
| 5   | 0.621256… |
| 10  | 0.621449… |
| 50  | 0.621450… |
| 100 | 0.621450… |
You can see from these tables the vastly improved numerical performance of our calculations, and we can now say with confidence that
8.7 Cauchy’s Second Integral Theorem
When we try to apply Cauchy’s first integral theorem we may find it is not possible to construct a useful contour C such that a portion of it lies along the real axis and yet does not have a singularity in its interior. The integral of (8.6.5) for the case of b2 < 4ac will prove to be an example of that situation, and I’ll show you some other examples in this section, as well. The presence of singularities inside C means that Cauchy’s first integral theorem no longer applies. ‘Getting around’ (pun intended!) this complication leads us to Cauchy’s second integral theorem: if f(z) is analytic everywhere on and inside C then, if z0 is inside C,
By successively differentiating with respect to z0 under the integral sign, it can be shown that all the derivatives of an analytic f(z) exist (we’ll use this observation in the next section):
where z0 is any point inside C and f(n) denotes the n-th derivative of f.
While f(z) itself has no singularities (because it’s analytic) inside C, the integrand of (8.7.1) does have a first-order singularityFootnote 13 at z = z0. Now, before I prove (8.7.1) let me show you a pretty application of it, so you’ll believe it will be well-worth your time and effort to understand the proof. What we’ll do is evaluate the contour integral
where C is the contour shown in Fig. 8.7.1, and a and b are each a positive constant. When we are nearly done, we’ll let T → ∞ and you’ll see we will have derived a famous result (one we’ve already done, in fact, in (3.1.7)), with the difference being that using Cauchy’s second integral theorem will be the easier of the two derivations! Along the real axis part of C we have z = x, and along the semicircular arc we have z = Teiθ, where θ = 0 at x = T and θ = π at x = −T. So,
The integrand of the contour integral can be written in a partial fraction expansion as
and so we have
Since the integrand of the second contour integral on the left-hand side is analytic everywhere inside of C—that integrand does have a singularity, yes, but it’s at z = −ib which is outside of C, as shown in Fig. 8.7.1—then we know from Cauchy’s first integral theorem that the second contour integral on the left-hand side is zero. And once T > b (remember, eventually we are going to let T → ∞) then the singularity for the remaining contour integral on the left is inside C, at z = ib. Thus,
The integrand of the contour integral on the left looks exactly like f(z)/(z − z0), with f(z) = eiaz and z0 = ib. Cauchy’s second integral theorem tells us that, if T > b, the contour integral is equal to 2πi f(z0), and so the left-hand side of the last equation is equal to
That is,
Now, if we at last let T → ∞ then, making the same sort of argument that we did concerning the line integral along the circular arc in the previous section, we see that the second integral on the left vanishes like \( \frac{1}{\mathrm{T}} \). And so, using Euler’s formula, we have
Equating imaginary parts we arrive at
which is surely no surprise since the integrand is an odd function of x. Equating real parts gives us the far more interesting
This is a result we’ve already derived using ‘routine’ methods—see (3.1.7). We also did it, using the concept of the energy spectrum of a time signal, in Challenge Problem 7.7 (you did do that problem, right?). As I’ve said before, it’s good to see contour integration in agreement with previous analysis.
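Here's an independent numerical check of that famous result, ∫−∞∞ cos(ax) dx/(x2 + b2) = (π/b)e−ab, sketched in Python. The truncation at x = 200 and the test values a = 1, b = 2 are my own choices; the 1/x2 decay makes the oscillating tail negligible.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b = 1.0, 2.0
# The integrand is even, so integrate the half-line and double it.
# The tail past x = 200 is of order 1/200^2.
num = 2.0 * simpson(lambda x: math.cos(a * x) / (x * x + b * b), 0.0, 200.0, 200000)
closed = (math.pi / b) * math.exp(-a * b)
```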
Okay, here’s how to see what’s behind Cauchy’s second integral theorem. The proof is beautifully elegant. In Fig. 8.7.2 I have drawn the contour C and, in its interior, marked the point z0. In addition, centered on z0 I’ve drawn a circle C∗ with a radius ρ that is sufficiently small that C∗ lies completely in the interior of C. Now, imagine that, starting on C at some arbitrary point (call it A), we begin to travel along C in the positive (CCW) sense until we reach point a, whereupon we then travel inward to point b on C∗. Once at point b we travel CW (that is, in the negative sense) along C∗ until we return to point b. We then travel back out to C along the same path we traveled inward on until we return to point a. We then continue on along C in the CCW sense until we return to our starting point A.
Here’s the first of two crucially important observations on what we’ve just done. The complete path we’ve followed has always kept the annular region between C and C∗ to our left. That is, this path is the edge of a region which does not contain the point z0. So, in that annular region from which z = z0 has been excluded by construction, f(z)/(z − z0) is analytic everywhere. Thus, by Cauchy’s first integral theorem, since z = z0 is outside C we have
The reason for writing −C∗ in the path description of the contour integral is that we went around C∗ in the negative sense.
Here’s the second of our two crucially important observations. The two trips along the ab-connection between C and C∗ (mathematicians call this two-way connection a cross-cut) are in opposite directions and so cancel each other. That means we can write (8.7.2) as
The reason for the minus sign in front of the C∗ contour integral at the far-right of (8.7.3) is, again, because we went around C∗ in the negative sense. The two far-right integrals in (8.7.3) are in the positive sense, however, and so the minus sign has been moved from the −C∗ path descriptor at the bottom of the integral sign to the front of the integral, itself.
Now, while C is an arbitrary simple curve enclosing z0, C∗ is a circle with radius ρ centered on z0. So, on C∗ we can write z = z0 + ρeiθ (which means dz = iρeiθdθ) and, therefore, as θ varies from 0 to 2π on our one complete trip around C∗, (8.7.3) becomes
If the integral on the far left is to have a value then, whatever that value is, it must be independent of ρ. After all, the integral at the far left has no ρ in it! So, the integral on the far right must be independent of ρ, too, even though it does have ρ in it. That means we must be able to use any value of ρ we wish. So, let’s use a value for ρ that is convenient.
In particular, let’s use a very small value, indeed one so small as to make the difference between f(z) and f(z0), for all z on C∗, as small as we like. We can do this because f(z) is assumed to be analytic, and so has a derivative everywhere inside C (including at z = z0), and so is certainly continuous there. Thus, as ρ → 0 we can argue f(z) → f(z0) all along C∗ and thus
Finally, pulling the constant f(z0) out of the integral, we have
which is (8.7.1) and our proof of Cauchy’s second integral theorem is done.
We can now do the integral in (8.6.5) for the case of b2 < 4ac. That is, we’ll now study the contour integral
The integrand of this integral has two singularities, neither of which is on the real axis. Since b2 < 4ac these singularities are complex, and are given by
and
In Fig. 8.7.3 I’ve shown these singular points having negative real parts, but they could be positive, depending on the signs of a and b. It really doesn’t matter, however: all that matters is that with the contour C drawn in the figure only one of the singular points is inside C (arbitrarily selected to be z1) while the other singularity (z2) is in the exterior of C.
Now, write the integrand as a partial fraction expansion:
Thus,
The second integral on the right is zero by Cauchy’s first integral theorem (the singularity z2 is not enclosed by C) and so
From Cauchy’s second integral theorem that we just proved (with f(z) = 1) we have
and so
But, the line integral around C is
and the θ-integral clearly vanishes like \( \frac{1}{\mathrm{T}} \) as T → ∞. Thus,
For example, if a = 5, b = 7, and c = 3 (notice that b2 = 49 < 4ac = 4(5)(3) = 60) then (8.7.4) says our integral is equal to \( \frac{2\uppi}{\sqrt{11}}=1.8944\dots \) and MATLAB agrees, as integral(@(x)1./(5*x.^2 + 7*x + 3),-1e5,1e5) = 1.8944… .
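The same check can be sketched in Python. The closed form 2π/√(4ac − b2) is read off from the quoted 2π/√11 example for (8.7.4); the truncation at |x| = 1000 is my choice.

```python
import math

a, b, c = 5.0, 7.0, 3.0                 # the text's example: b^2 = 49 < 4ac = 60
closed = 2.0 * math.pi / math.sqrt(4.0 * a * c - b * b)

def simpson(f, lo, hi, n):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# 1/x^2 decay: truncating at |x| = 1000 leaves tails of order 2e-4 each side.
num = simpson(lambda x: 1.0 / (a * x * x + b * x + c), -1000.0, 1000.0, 400000)
```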
For a dramatic illustration of the first and second theorems, I’ll now use them to calculate an entire class of integrals:
where m and n are any non-negative integers such that (to insure the integral exists) n − m ≥ 2. What we’ll do is study the contour integral
with an appropriately chosen C. The integrand in (8.7.5) has n first-order singularities, at the n n-th roots of −1. These singular points are uniformly spaced around the unit circle in the complex plane. Since Euler’s formula tells us that
for k any integer, then these singular points are located at
For other values of k, of course, these same n points simply repeat. Now, let’s concentrate our attention on just one of these singular points, the one for k = 0. We’ll pick C to enclose just that one singularity, at \( \mathrm{z}={\mathrm{z}}_0={\mathrm{e}}^{i\frac{\uppi}{\mathrm{n}}} \), as shown in Fig. 8.7.4. The central angle of the wedge is \( \frac{2\uppi}{\mathrm{n}} \) and the singularity is at half that angle, \( \frac{\uppi}{\mathrm{n}} \).
As we go around C to do the integral in (8.7.5), the descriptions of the contour’s three portions are:
on C1: z = x, dz = dx, 0 ≤ x ≤ T;
on C2: \( \mathrm{z}={\mathrm{Te}}^{i\uptheta},\kern0.5em \mathrm{d}\mathrm{z}=i{\mathrm{Te}}^{i\uptheta}\mathrm{d}\uptheta, \kern0.5em 0\le \uptheta \le \frac{2\uppi}{\mathrm{n}} \);
on C3: \( \mathrm{z}={\mathrm{re}}^{i\frac{2\uppi}{\mathrm{n}}},\kern0.5em \mathrm{dz}={\mathrm{e}}^{i\frac{2\uppi}{\mathrm{n}}}\mathrm{dr},\kern0.5em \mathrm{T}\ge \mathrm{r}\ge 0 \);
So,
Now, clearly, as T → ∞ the θ-integral goes to zero because m + 1 < n. Also,
So, as T → ∞
Or, as
we have
Since
we can write the integrand of the contour integral in (8.7.5) as a partial fraction expansion:
where the N’s are constants. Integrating this expansion term-by-term, we get
since Cauchy’s first integral theorem says all the other integrals are zero because, by construction, C does not enclose the singularities z1, z2, …, zn−1. The only singularity C encloses is z0. Cauchy’s second integral theorem in (8.7.1), with f(z) = 1, says that the integral on the right is 2πi, and so (8.7.6) becomes
Our next (and final) step is to calculate N0. To do that, multiply through the partial fraction expansion of the integrand in (8.7.5) by z − z0 to get
and then let z → z0. This causes all the terms on the right after the first to vanish, and so
So, to resolve this indeterminacy, we’ll use L’Hôpital’s rule:
or, with \( {\mathrm{z}}_0={\mathrm{e}}^{i\frac{\uppi}{\mathrm{n}}} \),
Inserting this result into (8.7.7),
and so we have the beautiful result
For a specific example, you can confirm that m = 0 and n = 4 reproduces our result in (2.3.4). As a new result, if m = 0 and n = 3 then (8.7.8) says that
\( {\int}_0^{\infty}\frac{\mathrm{dx}}{{\mathrm{x}}^3+1}=\frac{\frac{\uppi}{3}}{\sin \left(\frac{\uppi}{3}\right)}=\frac{\frac{\uppi}{3}}{\frac{\sqrt{3}}{2}}=\frac{2\uppi}{3\sqrt{3}}=1.209199\dots \) and MATLAB agrees:
integral(@(x)1./(x.^3 + 1),0,inf) = 1.209199… .
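Both worked examples (m = 0, n = 4 and m = 0, n = 3) are consistent with reading (8.7.8) as ∫0∞ xm dx/(xn + 1) = (π/n)/sin((m + 1)π/n). Taking that as the general form, a quick Python sweep over a few (m, n) pairs; the truncation and helper names are mine.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

def closed_form(m, n):
    # (8.7.8): (pi/n)/sin((m+1)pi/n)
    return (math.pi / n) / math.sin((m + 1) * math.pi / n)

def numeric(m, n, T=200.0, steps=100000):
    # Integrand decays like x^(m-n); the tail past T = 200 is tiny for n - m >= 3.
    return simpson(lambda x: x ** m / (x ** n + 1.0), 0.0, T, steps)

checks = [(0, 3), (1, 4), (2, 5)]
```

For (0, 3) the closed form gives 1.209199…, matching the MATLAB value above.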
The result of (8.7.8) can be put into at least three alternative forms that commonly appear in the math literature. First, define t = xn and so \( \frac{\mathrm{dt}\ }{\mathrm{dx}}={\mathrm{n}\mathrm{x}}^{\mathrm{n}-1} \)
which means
Thus, (8.7.8) becomes
or,
Now, define
which saysFootnote 14
For example, if \( \mathrm{a}=\frac{1}{2} \) then
and MATLAB agrees as integral(@(x)1./(sqrt(x).*(x + 1)),0,inf) = 3.14159265… .
Another way to reformulate (8.7.8) is to start with (8.7.9) and define t = ln (x), and so
Thus, (8.7.9) becomes (because x = 0 means t = − ∞)
That is,
For example, if \( \mathrm{a}=\frac{1}{3} \) the integral equals \( \frac{\uppi}{\sin \left(\uppi /3\right)}=\frac{\uppi}{\sqrt{3}/2}=\frac{2\uppi}{\sqrt{3}}=3.62759\dots \) and MATLAB agrees as integral(@(x)exp(x/3)./(1 + exp(x)),-1e3,1e3) = 3.62759… . It’s interesting to compare (8.7.10) with (8.6.9).
And finally, in (8.7.9) make the change of variable
and so
Then,
and so
I’ll end this section with two examples of the use of Cauchy’s second integral theorem. In the first one multiple singularities of first-order appear when, for the arbitrary positive constant a, we’ll calculate the value of
We can handle the multiple singularities by simply using the cross-cut idea from earlier in this section. That is, as we travel around a contour C, just move inward along a cross-cut to the first singularity and then travel around it on a tiny circle with radius ρ and then back out along the cross-cut to C. Then, after traveling a bit more on C do the same thing with a new cross-cut to the second singularity. And so on, for all the rest of the singularities. (‘Tiny’ means pick ρ small enough that none of the singularity circles intersect, and that all are always inside C.) For each singularity we’ll pick-up a value of 2πi f(z0), where the integrand of the contour integral we are studying is \( \frac{\mathrm{f}\left(\mathrm{z}\right)}{\left(\mathrm{z}-{\mathrm{z}}_0\right)} \).
So, what contour integral will we be studying? On the unit circle C we have
and so
as well as
Thus,
and so the contour integral we’ll study is
The integrand clearly has four first-order singularities, all located on the real axis at:
and
By inspection it is seen that for the first pair of singularities |z| > 1 and so both lie outside C, while for the second pair |z| < 1 and so both lie inside C. Specifically, let’s write z1 and z2 as the inside singularities where
and
while z3 and z4 are the outside singularities where
and
The integrand of the contour integral on the right in (8.7.13) is
or, making a partial fraction expansion,
Thus, the contour integral from (8.7.13) is
Since the singularities z = ± z3 lie outside the unit circle C, Cauchy’s first integral theorem tells us that the last two contour integrals on the right of (8.7.15) each vanish, and so the values of N3 and N4 are of no interest. That’s not the case for the first two contour integrals, however, as the singularities z = ± z1 lie inside the unit circle C. So, we need to calculate N1 and N2, and they can be easily determined as follows. For N1, multiply through (8.7.14) by (z − z1) and then let z → z1, to get
To find N2, multiply through (8.7.14) by (z + z1) and then let z → −z1, to get
Thus, Cauchy’s second integral theorem tells us that
Looking back at (8.7.13) one last time, the integral we are after is 4i times this last result, and so
If a = 3 this is 1.81379936…, and MATLAB agrees because integral(@(x)1./(3 + (sin(x).^2)),0,2*pi) = 1.81379936… .
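The quoted a = 3 value is consistent with the closed form 2π/√(a(a + 1)). Taking that as the general result (my statement of it, since the final displayed equation isn't reproduced here), a short Python check using the midpoint rule, which is spectrally accurate for smooth periodic integrands:

```python
import math

def periodic_integral(f, n=4096):
    # Midpoint rule over a full period: spectrally accurate for smooth
    # periodic integrands.
    h = 2.0 * math.pi / n
    return h * sum(f((j + 0.5) * h) for j in range(n))

a = 3.0
num = periodic_integral(lambda t: 1.0 / (a + math.sin(t) ** 2))
closed = 2.0 * math.pi / math.sqrt(a * (a + 1.0))   # assumed closed form
```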
For a second example of Cauchy’s second integral theorem, here’s a little twist to the calculations. We’ll evaluate
and so the obvious contour integral for us to study is
as the integrand reduces to our desired integrand when z is on the real axis (where z = x).
One immediate question is, of course, what’s C? And a second concern, which may (or may not) occur to you right away is that there is no apparent ‘(z − z0)’ in the denominator of the integrand (which the second theorem explicitly contains in its statement). The answers to these two concerns are, fortunately, not difficult to find. To start, let’s determine where the singularities (if any) of the integrand are located. Any singularities are the solutions to ez + e−z = 0, and thus e2z = −1. Since Euler’s identity tells us that, for k any integer, ei(π + 2πk) = −1, then there are singularities, infinite in number, all located on the imaginary axis at \( \mathrm{z}=i\uppi \left(\mathrm{k}+\frac{1}{2}\right) \), k = … − 2, − 1, 0, 1, 2, … . This result helps us in deciding what contour C we should use.
If you recall the discussion of the contour of Fig. 8.6.3, we avoided using a semi-circular contour then because of the very same situation we have now—an infinite number of singularities on the imaginary axis that an expanding semi-circular contour would enclose. Instead, we earlier used a rectangular contour of fixed height that, as we expanded its width (along the real axis), always enclosed just one singularity. We’ll do the same here, as well, with the contour of Fig. 8.7.5, with the single (k = 0) singularity \( {\mathrm{z}}_0=i\frac{\uppi}{2} \) enclosed by C = C1 + C2 + C3 + C4. This is actually an easier contour with which to work than was Fig. 8.6.3, as now we have no indents to consider.
To use Cauchy’s second theorem, we’ll write
where
That is, g(z0) in (8.7.17) is given by
To resolve this indeterminacy, we’ll use L’Hôpital’s rule. Thus,
Since \( \sin \left(\frac{\uppi}{2}\right)=1 \) and \( \cos \left(\frac{\uppi}{2}\right)=0 \), then we have
and (8.7.17) becomes
So, writing out (8.7.18) explicitly, we have, starting with C1 on the real axis (where z = x),
Our next task is the evaluation of the C2, C3, and C4 integrals. On C2 we have z = T + iy (dz = idy), on C3 we have z = x + iπ (dz = dx), and on C4 we have z = −T + iy (dz = idy). In turn, then:
On C2,
On C4,
On C3,
or, as eiπ = e−iπ = −1,
Putting these results into (8.7.19), with T → ∞, we have
Or, replacing cos(x) with complex exponentials,
and so, cancelling the common \( \left({\mathrm{e}}^{-\frac{\uppi}{2}}+{\mathrm{e}}^{\frac{\uppi}{2}}\right) \) factors,
Finally, equating real parts on each side of the equality,Footnote 15
or,
or, as \( {\int}_{-\infty}^{\infty }=2{\int}_0^{\infty } \)since the integrand is even, we at last have our answer:
This equals 0.31301008281… and MATLAB agrees, as integral(@(x)cos(x)./(exp(x) + exp(-x)),0,inf) = 0.31301008281… .
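A Python cross-check of this final value, with the closed form written as (π/2)/(eπ/2 + e−π/2) to match the derivation; the truncation at x = 40 is my choice.

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += f(a + i * h) * (4 if i % 2 else 2)
    return s * h / 3

# e^x + e^(-x) grows like e^x, so the tail past x = 40 is of order e^(-40).
num = simpson(lambda x: math.cos(x) / (math.exp(x) + math.exp(-x)), 0.0, 40.0, 40000)
closed = (math.pi / 2.0) / (math.exp(math.pi / 2.0) + math.exp(-math.pi / 2.0))
```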
8.8 Singularities and the Residue Theorem
In this section we’ll derive the wonderful residue theorem, which will reduce what appear to be astoundingly difficult definite integrals to merely ‘routine’ status. We start with an f(z) that is analytic everywhere in some region R in the complex plane except at the point z = z0, which is a singularity of order m ≥ 1. That is,
where g(z) is analytic throughout R. Because g(z) is analytic we know it is ‘well-behaved,’ which is math-lingo for ‘all the derivatives of g(z) exist.’ (Take a look back at (8.7.1) and the comment that follows it.) That means g(z) has a Taylor series expansion (discussed in more detail in the next chapter, after (9.2.18)) about z = z0 and so we can write
Putting (8.8.2) into (8.8.1) gives
or, as it is usually written,
where in the second sum all the bn = 0 for n > m.
That every infinitely differentiable function is uniquely represented by its Taylor series expansion was long an accepted mathematical fact. That is, it was until 1823, when Cauchy gave an astounding counterexample. Consider what has to be the most trivial function that one can imagine: f(x) = 0 for all x. The Taylor series expansion for any infinitely differentiable function around x = 0 is \( \mathrm{f}\left(\mathrm{x}\right)={\sum \limits}_{\mathrm{k}=0}^{\infty}\frac{{\mathrm{x}}^{\mathrm{k}}}{\mathrm{k}!}{\mathrm{f}}^{\left(\mathrm{k}\right)}(0) \), where f(k)(x) is the kth derivative of the function. For the trivial f(x) = 0 every one of those derivatives is obviously zero. Cauchy’s counterexample was the demonstration of another function that is clearly not identically zero and yet has every one of its infinity of derivatives at x = 0 also equal to zero! Cauchy’s function, \( \mathrm{f}\left(\mathrm{x}\right)={\mathrm{e}}^{-\frac{1}{{\mathrm{x}}^2}} \) (with f(0) defined to be 0), is easy to differentiate, and you should be able to convince yourself that, for any k, f(k)(x) is a polynomial in \( \frac{1}{\mathrm{x}} \), multiplied by \( {\mathrm{e}}^{-\frac{1}{{\mathrm{x}}^2}} \), and that \( {\mathrm{x}}^{-\mathrm{k}}{\mathrm{e}}^{-\frac{1}{{\mathrm{x}}^2}}\to 0 \) as x → 0 for all values of k. None of this will cause any trouble for anything we do in this book, but you should know that things are not quite so benign as I may have led you to believe.
The series expansion in (8.8.3) of f(z), an expansion about a singular point that involves both positive and negative powers of (z − z0), is called the Laurent series of f(z), named after the French mathematician Pierre Alphonse Laurent (1813–1854) who developed it in 1843. (In books dealing with complex analysis in far more detail than I am doing here, it is shown that the Laurent series expansion is unique.) We can find formulas for the an and bn coefficients in (8.8.3) as follows. Begin by observing that if k is any integer (negative, zero, or positive), then if C is a circle of radius ρ centered on z0 (which means that on C we have z = z0 + ρeiθ), then
As long as k ≠ −1 this last expression is 0. If, on the other hand, k = −1 our expression becomes the indeterminate \( \frac{0}{0} \). To get around that, for k = −1 simply back-up a couple of steps and write
That is, for k any integer,
So, to find a particular a-coefficient (say, aj) in the Laurent series for f(z), simply divide through (8.8.3) by (z − z0)j + 1 and integrate term-by-term. All of the integrals will vanish because of (8.8.4) with a single exception:
That is,
And to find a particular b-coefficient (say, bj), simply multiply by (z − z0)j−1 through (8.8.3) and integrate term-by-term. All of the integrals will vanish because of (8.8.4) with a single exception:
That is,
One of the true miracles of contour integration is that, of the potentially infinite number of coefficients given by the formulas (8.8.5) and (8.8.6), only one will be of interest to us. That chosen one is b1 and here’s why. If we set j = 1 in (8.8.6) thenFootnote 16
which is almost precisely (to within a factor of 2πi) the ultimate quest of our calculations, the determination of
But of course we don’t do the integral to find b1 (if we could directly do the integral, who cares about b1?!), but rather we reverse the process and calculate b1 by some means other than integration and then use that result in (8.8.7) to find the integral. The value of b1 is called the residue of f(z) at the singularity z = z0.
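The key fact driving all of this, (8.8.4)—the circle integral of (z − z0)k is 2πi for k = −1 and 0 for every other integer k—is easy to check numerically. A Python sketch; the center z0 = 1 + 2i, radius ρ = 0.5, and sample count are arbitrary choices of mine.

```python
import cmath
import math

def circle_integral(k, z0=1.0 + 2.0j, rho=0.5, n=1000):
    """Numerically integrate (z - z0)^k dz around a circle of radius rho
    centered on z0, via z = z0 + rho*e^(i*theta), dz = i*rho*e^(i*theta) dtheta."""
    total = 0.0 + 0.0j
    for j in range(n):
        theta = 2.0 * math.pi * j / n
        w = rho * cmath.exp(1j * theta)   # this is z - z0 on the circle
        total += (w ** k) * (1j * w)      # integrand times dz/dtheta
    return total * (2.0 * math.pi / n)
```

Only k = −1 survives: every other integer power integrates to zero (to rounding), regardless of ρ, exactly as (8.8.4) claims.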
What does ‘some means other than integration’ mean? As it turns out, it is not at all difficult to get our hands on b1. Let’s suppose (as we did at the start of this section) that f(z) has a singularity of order m. That is, writing-out (8.8.3) in just a bit more detail,
So, multiplying through by (z − z0)m gives
Next, differentiate with respect to z a total of m − 1 times. That has three effects: (1) all the a-coefficient terms will retain a factor of (z − z0) to at least the first power; (2) the b1 term will be multiplied by (m − 1)!, but will have no factor involving (z − z0); and (3) all the other b-coefficient terms will be differentiated to zero. Thus, if we then let z → z0 the a-coefficient terms will vanish and we’ll be left with nothing but (m − 1) ! b1. Therefore,
where z0 is a m-order singularity of f(z).
For a first-order singularity (m = 1) the formula in (8.8.8) reduces, with the interpretation of \( \frac{{\mathrm{d}}^{\mathrm{m}-1}}{{\mathrm{d}\mathrm{z}}^{\mathrm{m}-1}}=1 \) if m = 1, to
Alternatively, write
where, as before, g(z) is analytic at the singularity z = z0 (which is then, of course, a first-order zero of h(z)). That is,
Then,
where the denominator on the far-right (which you no doubt recognize as the definition of the derivative of h(z)) follows because we can replace h(z) with h(z) − h(z0), since h(z0) = 0. So,
That is, the residue for a first-order singularity at z = z0 in the integrand \( \mathrm{f}\left(\mathrm{z}\right)=\frac{\mathrm{g}\left(\mathrm{z}\right)}{\mathrm{h}\left(\mathrm{z}\right)} \) can be computed as
I’ll show you an example of the use of (8.8.9) in the next section of this chapter.
Sometimes you can use other ‘tricks’ to get the residue of a singularity. Here’s one that directly uses the Laurent series without requiring any differentiation at all. Let’s calculate
for k an even positive integer. (The integral is, of course, zero for k an odd integer because the cosine is symmetrical about the θ-axis over the interval 0 to 2π and so bounds zero area.) Using (8.7.12) again, on the unit circle C we have
and dz = iz dθ. So, let’s study the contour integral
Here we have a singularity at z = 0 of order m = k + 1. That can be a lot of differentiations, using (8.8.8), if k is a large number!
An easy way to get the residue of this high-order singularity is to use the binomial theorem to expand the integrand as
which is a series expansion in negative and positive powers of z around the singular point at zero. That is, it must be the Laurent expansion of the integrand (remember, such expansions are unique), from which we can literally read off the residue (the coefficient of the z^{−1} term). Setting 2j − k − 1 = −1, we find that \( \mathrm{j}=\frac{\mathrm{k}}{2} \) and so the residue is
Thus,
and so, from (8.8.10),
For example, if k = 18 the integral is equal to 1.165346… and MATLAB agrees because integral(@(x)cos(x).^18,0,2*pi) = 1.165346… .
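The closed form read off from the Laurent residue — ∫₀^{2π} cosᵏ(θ) dθ = 2π·C(k, k/2)/2ᵏ for even k, reconstructed here from the k = 18 value just quoted — is easy to confirm. A minimal sketch, with Python standing in for the text’s MATLAB (the function names are mine):

```python
from math import comb, pi, cos

def cos_power_integral(k):
    """Closed form (read off from the Laurent residue) of the
    integral of cos(theta)**k over 0..2*pi, for even k:
    2*pi * C(k, k/2) / 2**k."""
    assert k % 2 == 0 and k >= 0
    return 2 * pi * comb(k, k // 2) / 2 ** k

def numeric(k, n=4096):
    # midpoint rule: exact (to rounding) for a trigonometric
    # polynomial of degree k once n > k
    h = 2 * pi / n
    return h * sum(cos((i + 0.5) * h) ** k for i in range(n))

print(cos_power_integral(18))  # 1.165346... (the MATLAB value quoted above)
print(numeric(18))
```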
Next, let’s do an example using (8.8.8). In (3.4.8) we derived the result (where a and b are each a positive constant, with a > b)
which is equivalent to
because cos(θ) from π to 2π simply runs through the same values it does from 0 to π. Now, suppose we set a = 1 and write b = k < 1. Then (3.4.8) says
This result might prompt one to ‘up the ante’ and ask for the value of
If we take C to be the unit circle centered on the origin, then on C we have, as in the previous section, z = e^{iθ} and so, from Euler’s identity, we can write
So, on C we have
and therefore
Also, as before,
and so
All this suggests that we consider the contour integral
We see that the integrand has two singularities, and that each is second-order. That is, m = 2 and the singularities are at
Since k < 1 the singularity at
is outside C, while the singularity at
is inside C. That is, z02 is the only singularity for which we need to compute the residue as given by (8.8.8).
So, with m = 2, that residue is
Since
we have
which, after just a bit of algebra that I’ll let you confirm, reduces to
Then, finally, we let \( \mathrm{z}\to {\mathrm{z}}_{02}=\frac{-1+\sqrt{1-{\mathrm{k}}^2}}{\mathrm{k}} \) and so
Thus,
and so
That is,
For example, if \( \mathrm{k}=\frac{1}{2} \) then our result is 9.67359…, and MATLAB agrees because integral(@(x)1./(1 + 0.5*cos(x)).^2,0,2*pi) = 9.67359… .
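The closed form this calculation yields — ∫₀^{2π} dθ/(1 + k cos θ)² = 2π/(1 − k²)^{3/2} for k < 1, reconstructed here from the k = 1/2 value just quoted — checks out numerically. A sketch in Python rather than MATLAB (function names mine):

```python
from math import pi, cos

def closed_form(k):
    """Reconstructed closed form of the 0..2*pi integral of
    1/(1 + k*cos(theta))**2 for |k| < 1: 2*pi/(1 - k*k)**1.5."""
    return 2 * pi / (1 - k * k) ** 1.5

def numeric(k, n=4096):
    # midpoint rule: spectrally accurate for a smooth periodic integrand
    h = 2 * pi / n
    return h * sum(1 / (1 + k * cos((i + 0.5) * h)) ** 2 for i in range(n))

print(closed_form(0.5))  # 9.67359... (the MATLAB value quoted above)
print(numeric(0.5))
```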
To finish this section, I’ll now formally state what we’ve been doing all through it: if f(z) is analytic on and inside contour C with the exception of N singularities, and if Rj is the residue of the j-th singularity, then
This is the famous residue theorem. For each singularity we’ll pick up a contribution to the integral of 2πi times the residue of that singularity, with the residue calculated according to (8.8.8), or (8.8.9) if m = 1, using the value of m that goes with each singularity. That’s it! In the next (and final) section of this chapter I’ll show you an example of (8.8.13) applied to an integral that has one additional complication we haven’t yet encountered.
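In symbols, with C traversed counterclockwise, the residue theorem (the display the text labels (8.8.13)) reads:

```latex
\oint_C f(z)\,dz \;=\; 2\pi i\sum_{j=1}^{N} R_j
```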
8.9 Integrals with Multi-Valued Integrands
All of the wonderful power of contour integration comes from the theorems that tell us what happens when we travel once around a closed path in the complex plane. The theorems apply only for paths that are closed. I emphasize this point—particularly the word closed—because there is a subtle way in which closure can fail, so subtle in fact that it is all too easy to miss. Recognizing the problem, and then understanding the way to get around it, leads to the important concepts of branch cuts and branch points in the complex plane.
There are numerous examples that one could give of how false closed paths can occur, but the classic one involves integrands containing the logarithmic function. Writing the complex variable as we did in (8.3.3) as z = re^{iθ}, we have log(z) = ln(z) = ln(re^{iθ}) = ln(r) + iθ, 0 ≤ θ < 2π. Notice, carefully, the ≤ sign to the left of θ but that it is the strict < sign on the right. As was pointed out in Sect. 8.3, θ is not uniquely determined, as we can add (or subtract) any multiple of 2π from θ and still seemingly be talking about the same physical point in the complex plane. That is, we should really write log(z) = ln(r) + i(θ ± 2πn), 0 ≤ θ < 2π, n = 0, 1, 2, … . The logarithmic function is said to be multi-valued as we loop endlessly around the origin. The mathematical problem we run into with this more complete formulation of the logarithmic function is that it is not continuous on any path that crosses the positive real axis! Here’s why.
Consider a point z = z0 on the positive real axis. At that point, r = x0 and θ = 0. But the imaginary part of log(z) is not a continuous function of z at x0 because its value, in all tiny neighborhoods ‘just below’ the positive real axis at x0, is arbitrarily near 2π, not 0. The crucial implication of this failure of continuity is that the derivative of log(z) fails to exist as we cross the positive real axis, which means analyticity fails there, too. And that means all our wonderful integral theorems are out the window!
What is happening, geometrically, as we travel around what seems to be a closed circular path (starting at x0 and then winding around the origin) is that we do not return to the starting point x0. Rather, when we cross the positive real axis we enter a new branch of the log function. An everyday example of this occurs when you travel along a spiral path in a multi-level garage looking for a parking space and move from one level (branch) to the next level (another branch) of the garage.Footnote 17 Your spiral path ‘looks closed’ to an observer on the roof looking downward (just like you looking down on your math paper as you draw what seems to be a closed contour in a flat complex plane), but your parking garage trajectory is not closed. And neither is that apparently ‘closed’ contour. There is no problem for your car with this, of course, but it seems to be a fatal problem for our integral theorems.
Or, perhaps not. Remember the old saying: “If your head hurts because you’re banging it on the wall, then stop banging your head on the wall!” We have the same situation here: “If crossing the positive real axis blows-up the integral theorems, well then, don’t cross the positive real axis.” What we need to do here, when constructing a contour involving the logarithmic function, is to simply avoid crossing the positive real axis. What we’ll do, instead, is label the positive real axis, from the origin out to plus-infinity, as a so-called branch cut (the end-points of the cut, x = 0 and x = +∞, are called branch points ), and then avoid crossing that line. Any contour that we draw satisfying this restriction is absolutely guaranteed to be closed (that is, to always remain on a single branch) and thus our integral theorems remain valid.
Another commonly encountered multi-valued function that presents the same problem is the fractional power z^p = r^p e^{ipθ}, where −1 < p < 1 and, as before, we take 0 ≤ θ < 2π. Suppose, for example, we have the function \( \sqrt{\mathrm{z}} \) and so \( \mathrm{p}=\frac{1}{2} \). Any point on the positive real axis has θ = 0, but in a tiny neighborhood ‘just below’ the positive real axis the angle of z is arbitrarily near to 2π and so the angle of \( \sqrt{\mathrm{z}} \) is \( \frac{2\uppi}{2}=\uppi \). That is, on the positive real axis the function value at a point is \( \sqrt{\mathrm{r}} \) while an arbitrarily tiny downward shift of the point into the fourth quadrant gives a function value of \( \sqrt{\mathrm{r}}{\mathrm{e}}^{i\uppi}=-\sqrt{\mathrm{r}} \). The function value is not continuous across the positive real axis. The solution for handling z^p is, again, to define the positive real axis as a branch cut and to avoid using a contour C that crosses that cut.
The fact that I’ve taken 0 ≤ θ < 2π is the reason the branch cut is along the positive real axis. If, instead, I’d taken −π < θ ≤ π we would have run into the failure of continuity problem as we crossed the negative real axis, and in that case we would simply make the negative real axis the branch cut and avoid using any C crossing it. In both cases z = 0 would be the branch point. Indeed, in the examples I’ve discussed here we could pick any direction we wish, starting at z = 0, draw a straight line from there out to infinity, and call that our branch cut.
Let’s see how this all works. For the final calculation of this chapter, using these ideas, I’ll evaluate
where a and b are constants. We’ve already done two special cases of (8.9.1). In (1.5.1), for a = 0 and b = 1, we found that
and in (2.1.3) we generalized this just a bit to the case of arbitrary b:
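Written out (reconstructed from the stated special cases; both are consistent with the general result this section derives), those two earlier results are:

```latex
\int_0^{\infty}\frac{\ln(x)}{x^2+1}\,dx = 0,
\qquad
\int_0^{\infty}\frac{\ln(x)}{x^2+b^2}\,dx = \frac{\pi\,\ln(b)}{2b}
```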
In (8.9.1) we’ll now allow a, too, to have any non-negative value. The contour C we’ll use is shown in Fig. 8.9.1, which you’ll notice avoids crossing the branch cut (the positive real axis), as well as circling around the branch point at the origin. This insures that C lies entirely on a single branch of the logarithmic function, and so C is truly closed and the conditions for Cauchy’s integral theorems remain satisfied.
The contour C consists of four parts, where ρ and R are the radii of the small (C4) and large (C2) circular portions, respectively, and ε is a small positive angle:
on C1: z = re^{iε}, dz = e^{iε}dr, ρ < r < R;
on C2: z = Re^{iθ}, dz = iRe^{iθ}dθ, ε < θ < 2π − ε;
on C3: z = re^{i(2π − ε)}, dz = e^{i(2π − ε)}dr, R > r > ρ;
on C4: z = ρe^{iθ}, dz = iρe^{iθ}dθ, 2π − ε > θ > ε.
We will, eventually, let ρ → 0, R → ∞ , and ε → 0.
Our integrand will be
and you are almost surely wondering why the numerator is ln(z) squared. Why not just ln(z)? The answer, which I think isn’t obvious until you do the calculations, is that if we use just ln(z) we won’t get the value of (8.9.1), but rather that of a different integral. If we use ln(z) squared, however, we will get the value of (8.9.1). I’ll now show you how it all goes with ln(z) squared, and you should verify my comments about using just ln(z).
The integrand has three singularities: one at z = 0 (the branch point) where the numerator blows-up, and two at z = −a ± ib where the denominator vanishes. Only the last two are inside C as ρ and ε each go to zero, and as R goes to infinity, and each is first-order. From the residue theorem, (8.8.13), we therefore have
where R1 is the residue of the first-order singularity at z = −a + ib and R2 is the residue of the first-order singularity at z = −a − ib. As we showed in (8.8.9), the residue of a first-order singularity at z = z0 in the integrand function
is given by
For our problem,
and
Since
then, as
and
we have
and
Since a and b are both non-negative, the −a + ib singularity is in the second quadrant, and the −a − ib singularity is in the third quadrant. In polar form, then, the second quadrant singularity is at
and the third quadrant singularity is at
Therefore,
and
Thus, 2πi times the sum of the residues is
So, for the f(z) in (8.9.2) and the C in Fig. 8.9.1, we have
Based on our earlier experiences, we expect our final result is going to come from the C1 and C3 integrals because, as we let ρ, ε, and R go to their limiting values (of 0, 0, and ∞, respectively), we expect the C2 and C4 integrals will each vanish. To see that this is, indeed, the case, let’s do the C2 and C4 integrals first. For the C2 integral we have
Now, as R → ∞ consider the expression in the left-most square-brackets in the integrand. The numerator blows-up like ln²(R) for any given θ in the integration interval, while the denominator blows-up like R². That is, the left-most square-brackets behave like \( \frac{\ln^2\left(\mathrm{R}\right)}{{\mathrm{R}}^2} \). The expression in the right-most square-brackets blows-up like R. Thus, the integrand behaves like
and so the C2 integral behaves like
as R → ∞. Now
which is, of course, indeterminate, and so let’s use L’Hospital’s rule:
So, our expectation of the vanishing of the C2 integral is justified.
Turning next to the C4 integral, we have
As ρ → 0 the expression in the left-most square-brackets in the integrand behaves like \( \frac{\ln^2\left(\uprho \right)}{{\mathrm{a}}^2+{\mathrm{b}}^2} \) while the expression in the right-most square-brackets behaves like ρ. So, the C4 integral behaves like
as ρ → 0. Now,
Define \( \mathrm{u}=\frac{1}{\uprho} \). Then, as ρ → 0 we have u → ∞ and so
which we’ve just shown (in the C2 integral analysis) goes to zero. So, our expectation of the vanishing of the C4 integral is also justified.
Turning our attention at last to the C1 and C3 integrals, we have
or, as ρ → 0, R → ∞ , and ε → 0,
(Notice, carefully, how the ln²(r) terms cancel in these last calculations, leaving just ln(r) in the final expression.) Inserting these results into (8.9.3), we have
Equating real parts, we get
which shouldn’t really be a surprise.Footnote 18 (This is the lone integral you’ll get if you use just ln(z) instead of ln(z) squared in (8.9.2). Try it and see.) Equating imaginary parts is what gives us our prize for all this work:
This reduces to our earlier results for particular values of a and b. To see (8.9.4) in action, if both a and b equal 1 (for example) then
and MATLAB agrees, as integral(@(x)log(x)./((x + 1).^2 + 1),0,inf) = 0.272198… .
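The general closed form that this analysis produces — the display the text labels (8.9.4), reconstructed here as ∫₀^∞ ln(x)/((x+a)² + b²) dx = arctan(b/a)·ln(a² + b²)/(2b) — can be sanity-checked numerically; note π ln(2)/8 = 0.272198… for a = b = 1. A Python sketch (function names mine, Python standing in for the text’s MATLAB):

```python
from math import atan, log, pi, exp

def closed_form(a, b):
    """Reconstructed closed form of the 0..inf integral of
    ln(x)/((x+a)**2 + b**2) for a, b > 0."""
    return atan(b / a) * log(a * a + b * b) / (2 * b)

def numeric(a, b, lo=-40.0, hi=40.0, n=200_000):
    # substitute x = e**u so the integrand decays rapidly at both
    # ends, then apply the midpoint rule on the truncated interval
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * h
        x = exp(u)
        total += u * x / ((x + a) ** 2 + b * b)
    return total * h

print(closed_form(1.0, 1.0))  # 0.272198... = pi*ln(2)/8
print(numeric(1.0, 1.0))
print(closed_form(2.0, 3.0), numeric(2.0, 3.0))
```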
8.10 A Final Calculation
In the new Preface for this edition I promised you a ‘routine methods’ derivation of (8.6.9), as an example of how an integral done by contour integration can (if one is simply clever enough) be done using less powerful means. This is a point I made in the original edition (see Challenge Problems 8.7 and 8.8), but there my point was that a freshman calculus solution might be the easier of the two approaches. Here I’ll show you that, yes, we can indeed get (8.6.9) with just freshman calculus ideas, but it’s a toss-up value judgement whether that is, in some sense, the ‘easier’ approach.
Our analysis starts with Euler’s gamma function integral in (4.1.1), that I’ll rewrite here:
Now, as a preliminary calculation, we are going to find Γ′(a), the derivative of Γ(a) with respect to a (using Feynman’s favorite trick of differentiating an integral with respect to a parameter—in this case, a). To be sure this calculation is crystal clear, I’ll write (8.10.1) as
Thus,
or, changing the dummy variable of integration from x to s (which of course changes nothing but it will help keep the notation absolutely clear and free of confusion as I reference other results from earlier in the book), we have
Next, remembering the result we derived in (3.3.3),
we set p = 1 and q = s to get
Putting (8.10.3) into (8.10.2), we have
or, reversing the order of integration,
and so, remembering (8.10.1),
Now, concentrate your attention on the inner, right-most integral, and change variable to u = s(1 + z). Thus, \( \mathrm{s}=\frac{\mathrm{u}}{1+\mathrm{z}} \) and \( \mathrm{ds}=\frac{\mathrm{du}}{1+\mathrm{z}} \), and therefore
Using this in (8.10.4),
or,
You’ll recall from Sect. 5.4 that we called Γ′(a)/Γ(a) the digamma function (see note 6 in Chapter 5, and Challenge Problem 8.9), which I’ll write here as
If we now change variable to 1 + z = e^y, we have \( \frac{\mathrm{dz}}{\mathrm{dy}}={\mathrm{e}}^{\mathrm{y}} \) or, dz = e^y dy. Also, z = e^y − 1 and so y = 0 when z = 0 and y = ∞ when z = ∞. Thus, the right-most integral in (8.10.6) is
and (8.10.6) becomes (using x as the dummy variable of integration)
Okay, at last, we are ready to tackle the integral we are after, which I’ll write as
If we make the change of variable y = −x in the first integral on the right (and so dx = −dy), we have
and so
Now, let’s do something perhaps just a bit unexpected. Let’s add and subtract \( {\int}_0^{\infty}\frac{{\mathrm{e}}^{-\mathrm{x}}}{\mathrm{x}}\mathrm{dx} \) to the right-hand-side of (8.10.8), which of course changes nothing. This gives us
If you compare the two expressions in the two pairs of curly brackets in (8.10.9) with the expression for ψ(a) in (8.10.7), you see that the first expression on the right in (8.10.9) is ψ(1 − a), while the second expression on the right in (8.10.9) is ψ(a). That is,
We can get our hands on ψ(1 − a) − ψ(a) from the reflection formula for the gamma function, derived in (4.2.16). That is, as we showed there,
and so, taking logarithms on both sides,
and then differentiating with respect to a,
or,
Putting (8.10.11) into (8.10.10) gives us our result, the one we derived in (8.6.9) using contour integration:
After all of this, perhaps contour integration doesn’t look quite so difficult anymore!
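As a numerical sanity check of (8.6.9) — the principal-value integral ∫_{−∞}^{∞} e^{ay}/(1 − e^y) dy = π/tan(aπ) for 0 < a < 1, restated in C8.10 below — one can fold the integral about y = 0, which cancels the 1/y singularities and leaves a well-behaved integrand. A Python sketch (function names mine):

```python
from math import exp, pi, tan

def closed_form(a):
    # the result (8.6.9): pi/tan(a*pi), valid for 0 < a < 1
    return pi / tan(a * pi)

def numeric(a, hi=300.0, n=300_000):
    """Principal value via symmetrization: f(y) + f(-y) =
    (e**((1-a)*y) - e**(a*y)) / (e**y - 1), finite at y = 0."""
    h = hi / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * h
        total += (exp((1 - a) * y) - exp(a * y)) / (exp(y) - 1)
    return total * h

for a in (0.25, 1 / 3):
    print(a, closed_form(a), numeric(a))  # the two values should agree
```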
Historical Note: For many years the general feeling among mathematicians was that there were some integrals that were, ironically, simply ‘too complex’ to be evaluated by complex variables (contour integration). The probability integral \( {\int}_{-\infty}^{\infty }{\mathrm{e}}^{-{\mathrm{x}}^2}\mathrm{dx} \) was the usual case put forth in support of that view. Then, in 1947, the British mathematician James Cadwell (1915–1982) published a brief note (The Mathematical Gazette, October) where he showed how to do that integral by performing two successive contour integrations, one around a rectangle, followed by another around a pie-shaped sector of a circle. Shortly after that, the Russian-born English mathematician Leon Mirsky (1918–1983) showed how to do it in an even shorter note (The Mathematical Gazette, December 1949) using just a single contour (a parallelogram). In a footnote, however, Cadwell had noted that he had learned, since writing his original note, that the American-born British mathematician Louis Joel Mordell (1888–1972) had, decades earlier, already evaluated via contour methods the much more general \( {\int}_{-\infty}^{\infty}\frac{{\mathrm{e}}^{{\mathrm{at}}^2+\mathrm{bt}}}{{\mathrm{e}}^{\mathrm{ct}}+\mathrm{d}}\mathrm{dt} \) which reduces to the probability integral for the special case of a = 1 and b = c = d = 0. You can find Mordell’s quite difficult (in my opinion) analysis in the Quarterly Journal of Pure and Applied Mathematics (48) 1920, pp. 329–342. The prevailing view today is that any integral that can be done, can be cracked using either real or complex methods.
8.11 Challenge Problems
(C8.1)
Suppose f(z) is analytic everywhere in some region R in the complex plane, with an m-th order zero at z = z0. That is, f(z) = g(z)(z − z0)m, where g(z) is analytic everywhere in R. Let C be any simple, closed CCW contour in R that encircles z0. Explain why
(C8.2)
Back in Challenge Problem C3.9 I asked you to accept that \( {\int}_0^{\infty}\frac{\sin \left(\mathrm{mx}\right)}{\mathrm{x}\left({\mathrm{x}}^2+{\mathrm{a}}^2\right)}\mathrm{dx}=\frac{\uppi}{2}\left(\frac{1-{\mathrm{e}}^{-\mathrm{am}}}{{\mathrm{a}}^2}\right) \) for a > 0, m > 0. Here you are to derive this result using contour integration. Hint: Notice that since the integrand is even, \( {\int}_0^{\infty }=\frac{1}{2}{\int}_{-\infty}^{\infty }. \) Use \( \mathrm{f}\left(\mathrm{z}\right)=\frac{{\mathrm{e}}^{\mathrm{imz}}}{\mathrm{z}\left({\mathrm{z}}^2+{\mathrm{a}}^2\right)} \), notice where the singularities are (this should suggest to you the appropriate contour to integrate around) and then, at some point, think about taking an imaginary part.
(C8.3)
Derive the following integration formulas:
-
(a)
\( {\int}_0^{2\uppi}\frac{\mathrm{d}\uptheta}{1-2\mathrm{a}\ \cos \left(\uptheta \right)+{\mathrm{a}}^2}=\frac{2\uppi}{1-{\mathrm{a}}^2},0<\mathrm{a}<1; \)
-
(b)
\( {\int}_{-\infty}^{\infty}\frac{\cos \left(\mathrm{x}\right)}{{\left(\mathrm{x}+\mathrm{a}\right)}^2+{\mathrm{b}}^2}\mathrm{dx}=\frac{\uppi}{\mathrm{b}}{\mathrm{e}}^{-\mathrm{b}}\cos \left(\mathrm{a}\right)\ \mathrm{a}\mathrm{nd}\ {\int}_{-\infty}^{\infty}\frac{\sin \left(\mathrm{x}\right)}{{\left(\mathrm{x}+\mathrm{a}\right)}^2+{\mathrm{b}}^2}\mathrm{dx}=-\frac{\uppi}{\mathrm{b}}{\mathrm{e}}^{-\mathrm{b}}\sin \left(\mathrm{a}\right),\kern0.75em \mathrm{a}>0,\mathrm{b}>0; \)
-
(c)
\( {\int}_{-\infty}^{\infty}\frac{\cos \left(\mathrm{x}\right)}{\ \left({\mathrm{x}}^2+{\mathrm{a}}^2\right)\ \left({\mathrm{x}}^2+{\mathrm{b}}^2\right)}\mathrm{dx}=\frac{\uppi}{{\mathrm{a}}^2-{\mathrm{b}}^2}\left(\frac{{\mathrm{e}}^{-\mathrm{b}}}{\mathrm{b}}-\frac{{\mathrm{e}}^{-\mathrm{a}}}{\mathrm{a}}\right),\mathrm{a}>\mathrm{b}>0; \)
-
(d)
\( {\int}_0^{\infty}\frac{\cos \left(\mathrm{ax}\right)}{\kern0.5em {\left({\mathrm{x}}^2+{\mathrm{b}}^2\right)}^2}\mathrm{dx}=\frac{\uppi}{4\ {\mathrm{b}}^3}\left(1+\mathrm{ab}\right){\mathrm{e}}^{-\mathrm{ab}},\mathrm{a}>0,\mathrm{b}>0. \)
In (a), use the approach of Sect. 8.3 to convert the integral into a contour integration around the unit circle. In (b), (c), and (d), use the contour in Fig. 8.7.1.
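Two of these formulas, (a) and (d), are easy to spot-check numerically before attempting the contour work. A Python sketch (function names mine; the midpoint rule is spectrally accurate for the periodic integrand in (a), and the integrand in (d) decays like x⁻⁴, so truncation at x = 200 is harmless):

```python
from math import cos, pi, exp

def check_a(a, n=4096):
    # (a): integral over 0..2*pi of 1/(1 - 2a cos(t) + a^2) vs 2*pi/(1 - a^2)
    h = 2 * pi / n
    num = h * sum(1 / (1 - 2 * a * cos((i + 0.5) * h) + a * a) for i in range(n))
    return num, 2 * pi / (1 - a * a)

def check_d(a, b, hi=200.0, n=400_000):
    # (d): integral over 0..inf of cos(a x)/(x^2 + b^2)^2
    #      vs pi*(1 + a*b)*e^(-a*b)/(4*b^3)
    h = hi / n
    num = h * sum(cos(a * (i + 0.5) * h) / (((i + 0.5) * h) ** 2 + b * b) ** 2
                  for i in range(n))
    return num, pi * (1 + a * b) * exp(-a * b) / (4 * b ** 3)

print(check_a(0.5))   # both entries ≈ 8.37758
print(check_d(1, 1))  # both entries ≈ 0.57786
```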
(C8.4)
Using the contour in Fig. 8.9.1, show that \( {\int}_0^{\infty}\frac{{\mathrm{x}}^{\mathrm{k}}}{{\left({\mathrm{x}}^2+1\right)}^2}\mathrm{dx}=\frac{\uppi \left(1-\mathrm{k}\right)}{4\cos \left(\frac{\mathrm{k}\uppi}{2}\right)},-1<\mathrm{k}<3 \). Before doing any calculations, explain the limits on k. Hint: Use \( \mathrm{f}\left(\mathrm{z}\right)=\frac{{\mathrm{z}}^{\mathrm{k}}}{{\left({\mathrm{z}}^2+1\right)}^2} \), notice that the singularities at z = ± i are both second-order, and write \( {\mathrm{z}}^{\mathrm{k}}={\mathrm{e}}^{\ln \left({\mathrm{z}}^{\mathrm{k}}\right)}={\mathrm{e}}^{\mathrm{k}\ \ln \left(\mathrm{z}\right)} \).
(C8.5)
Show that \( {\int}_{-\infty}^{\infty}\frac{\cos \left(\mathrm{mx}\right)}{{\mathrm{ax}}^2+\mathrm{bx}+\mathrm{c}}\mathrm{dx}=-2\uppi \frac{\cos \left(\frac{\mathrm{m}\mathrm{b}}{2\mathrm{a}}\right)\sin \left(\frac{\mathrm{m}\sqrt{{\mathrm{b}}^2-4\mathrm{ac}}}{2\mathrm{a}}\right)}{\sqrt{{\mathrm{b}}^2-4\mathrm{ac}}} \) when b2 > 4ac. Notice that this result contains (8.6.5) as the special case of m = 0.
(C8.6)
Show that \( {\int}_0^{\infty}\frac{{\mathrm{x}}^{\mathrm{p}}}{\left(\mathrm{x}\kern0.5em +1\right)\left(\mathrm{x}\kern0.5em +2\right)}\mathrm{dx}=\left({2}^{\mathrm{p}}-1\right)\frac{\uppi}{\sin \left(\mathrm{p}\uppi \right)} \), −1 < p < 1. For \( \mathrm{p}=\frac{1}{2} \) this is \( \left(\sqrt{2}-1\right)\uppi =1.30129\dots \), and MATLAB agrees as integral(@(x)sqrt(x)./((x + 1).*(x + 2)),0,inf) = 1.30129… . Use the contour in Fig. 8.9.1.
(C8.7)
In his excellent 1935 book An Introduction to the Theory of Functions of a Complex Variable, Edward Copson (1901–1980), who was professor of mathematics at the University of St. Andrews in Scotland, wrote “A definite integral which can be evaluated by Cauchy’s method of residues can always be evaluated by other means, though generally not so simply.” Here’s an example of what Copson meant, an integral attributed to the great Cauchy himself. It is easily done with contour integration, but would (I think) otherwise be pretty darn tough: show that \( {\int}_0^{\infty}\frac{{\mathrm{e}}^{\cos \left(\mathrm{x}\right)}\ \sin \left\{\sin \left(\mathrm{x}\right)\right\}}{\mathrm{x}}\mathrm{dx}=\frac{\uppi}{2}\left(\mathrm{e}-1\right) \). MATLAB agrees with Cauchy, as this is 2.69907… and integral(@(x)exp(cos(x)).*sin(sin(x))./x,0,1e6) = 2.69595… . Hint: Look back at how we derived (8.6.4)—in particular the contour in Fig. 8.6.1—and try to construct the proper f(z) to integrate on that contour. (In the new Preface for this edition you’ll recall that there is a freshman calculus derivation of Cauchy’s integral that is, in fact, much easier to do than is the contour integration.)
(C8.8)
Here’s an example of an integral that Copson himself assigned as an end-of-chapter problem to be done by contour integration and residues, but which is actually easier to do by freshman calculus: show that \( {\int}_{-\infty}^{\infty}\frac{{\mathrm{x}}^2}{{\left({\mathrm{x}}^2+{\mathrm{a}}^2\right)}^3}\mathrm{dx}=\frac{\uppi}{8{\mathrm{a}}^3},\mathrm{a}>0 \). The two singularities in the integrand are each third-order and, while not a really terribly difficult computation (you should do it), here’s a simpler and more general approach. You are to fill-in the missing details.
(a) Start with \( {\int}_{-\infty}^{\infty}\frac{{\mathrm{x}}^2}{\left({\mathrm{x}}^2+{\mathrm{a}}^2\right)\left({\mathrm{x}}^2+{\mathrm{b}}^2\right)}\mathrm{dx} \), with a ≠ b, make a partial fraction expansion, and do the resulting two easy integrals; (b) let b → a and so arrive at the value for \( {\int}_{-\infty}^{\infty}\frac{{\mathrm{x}}^2}{{\left({\mathrm{x}}^2+{\mathrm{a}}^2\right)}^2}\mathrm{dx} \); (c) finally, use Feynman’s favorite trick of differentiating an integral to get Copson’s answer. Notice that you can now continue to differentiate endlessly to calculate \( {\int}_{-\infty}^{\infty}\frac{{\mathrm{x}}^2}{{\left({\mathrm{x}}^2+{\mathrm{a}}^2\right)}^{\mathrm{n}}}\mathrm{dx} \) for any n > 3 you wish.
(C8.9)
In Sect. 5.4 we argued that the derivative of the gamma function Γ(x) at x = 1 is Γ′(1) = −γ, where γ is Euler’s constant. Show that the integral form of the digamma function derived in (8.10.6) is consistent with that assertion. That is, show, since Γ(1) = 0 ! = 1, that \( \uppsi (1)=\frac{\Gamma^{\prime }(1)}{\Gamma (1)}={\Gamma}^{\prime }(1)={\int}_0^{\infty}\frac{{\mathrm{e}}^{-\mathrm{z}}}{\mathrm{z}}\mathrm{dz}-{\int}_0^{\infty}\frac{\mathrm{dz}}{\left(1+\mathrm{z}\right)\mathrm{z}}=-\upgamma \). Hint: Do the first integral by-parts, and the second one with a partial fraction expansion, with both integrals written as \( {\lim}_{\varepsilon \to 0}{\int}_{\varepsilon}^{\infty } \). This limiting operation is necessary because the two integrals, individually, diverge, but you’ll find that their difference is finite. And don’t forget (5.4.3).
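A numerical sketch of C8.9’s limiting difference, with Python standing in for MATLAB (the name E1 is just my label for the truncated exponential-integral term; the second integral has the closed form ln((1+ε)/ε)):

```python
from math import exp, log

def E1(eps, zmax=40.0, n=200_000):
    """Numerically integrate e**(-z)/z from eps to zmax via the
    substitution z = e**u (midpoint rule on a smooth, bounded integrand)."""
    lo, hi = log(eps), log(zmax)
    h = (hi - lo) / n
    return h * sum(exp(-exp(lo + (i + 0.5) * h)) for i in range(n))

eps = 1e-6
# the two divergent pieces: their difference stays finite as eps -> 0
diff = E1(eps) - log((1 + eps) / eps)
print(diff)  # ≈ -0.577216, i.e. -gamma (Euler's constant), as C8.9 claims
```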
The final four challenge problems of this chapter are not contour integrals, themselves, but rather are included here because they show how our earlier contour integrations can be further manipulated to give us even more quite interesting results using just standard freshman calculus techniques.
(C8.10)
Suppose m and n are non-negative real numbers such that n > m + 1. Show that \( {\int}_0^{\infty}\frac{{\mathrm{x}}^{\mathrm{m}}}{1-{\mathrm{x}}^{\mathrm{n}}}\mathrm{dx}=\frac{\uppi /\mathrm{n}}{\tan \left(\frac{\mathrm{m}+1}{\mathrm{n}}\uppi \right)} \). Hint: Start with (8.6.9), that is with \( {\int}_{-\infty}^{\infty}\frac{{\mathrm{e}}^{\mathrm{ay}}}{1-{\mathrm{e}}^{\mathrm{y}}}\mathrm{dy}=\frac{\uppi}{\tan \left(\mathrm{a}\uppi \right)} \) with 0 < a < 1, and make the change of variable ey = xn.
(C8.11)
Show that\( {\int}_0^{\infty}\frac{{\mathrm{x}}^{\mathrm{m}}}{{\mathrm{x}}^{\mathrm{n}}+\mathrm{b}}\mathrm{dx}=\frac{\uppi}{{\mathrm{n}\mathrm{b}}^{\left(\mathrm{n}-\mathrm{m}-1\right)/\mathrm{n}}\sin \left\{\frac{\left(\mathrm{m}+1\right)}{\mathrm{n}}\uppi \right\}} \). Hint: Try the change of variable x = ya1/n in (8.7.8) (and then, a bit later, you’ll find b = 1/a helpful).
(C8.12)
Show that \( {\int}_0^{\infty}\frac{{\mathrm{x}}^{\mathrm{a}}}{{\left(\mathrm{x}+\mathrm{b}\right)}^2}\mathrm{dx}={\mathrm{b}}^{\mathrm{a}-1}\frac{\uppi \mathrm{a}}{\sin \left(\uppi \mathrm{a}\right)} \). Hint: Using the result from C8.11, with m = a and n = 1, apply Feynman’s trick of differentiating with respect to a parameter (in this case, b).
(C8.13)
Show that \( {\int}_0^{\infty}\frac{\sqrt{\mathrm{x}}\ \ln \left(\mathrm{x}\right)}{{\left(\mathrm{x}+\mathrm{b}\right)}^2}\mathrm{dx}=\left[1+\frac{1}{2}\ln \left(\mathrm{b}\right)\right]\frac{\uppi}{\sqrt{\mathrm{b}}} \). Hint: Differentiate the result from C8.12 with respect to a, and then set \( \mathrm{a}=\frac{1}{2} \). This is a generalization of an integral that appeared in an 1896 textbook by the French mathematician Félix Tisserand (1845–1896), who did the b = 1 special case.
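C8.13’s claim is easy to verify numerically via the same x = e^u substitution trick used above. A Python sketch (function names mine; for b = 1 the stated value is exactly π):

```python
from math import exp, log, pi, sqrt

def closed_form(b):
    # C8.13's claim: [1 + ln(b)/2] * pi / sqrt(b)
    return (1 + 0.5 * log(b)) * pi / sqrt(b)

def numeric(b, lo=-80.0, hi=120.0, n=200_000):
    # substitute x = e**u: the integrand becomes u*e**(1.5u)/(e**u + b)**2,
    # which decays exponentially in both directions; midpoint rule
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        u = lo + (i + 0.5) * h
        total += u * exp(1.5 * u) / (exp(u) + b) ** 2
    return total * h

print(closed_form(1.0), numeric(1.0))  # both ≈ 3.14159 (= pi)
print(closed_form(4.0), numeric(4.0))
```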
Notes
- 1.
Jeremy Gray, The Real and the Complex: a history of analysis in the nineteenth century, Springer 2015, pp. 59–60.
- 2.
In his book Complex Analysis: Fundamentals of the Classical Theory of Functions, Birkhäuser 1998, p. 120.
- 3.
In keeping with the casual approach I’m taking in this book, I’ll just assume that these two limits exist and then we’ll see where that assumption takes us. Eventually we’ll arrive at a new way to do definite integrals (contour integration) and then we’ll check our assumption by seeing if our theoretical calculations agree with MATLAB’s direct numerical evaluations.
- 4.
There are, of course, two distinct ways we can have A = B. The trivial way is if C simply has zero length, which immediately says Ix = Iy = 0. The non-trivial way is if C goes from A out into the plane, wanders around for a while, and then returns to A (which we re-label as B). It is this second way that gives us a closed loop.
- 5.
If, instead, we had started with f(z) = z³ = (re^{iθ})³ = r³e^{i3θ} = r³{cos(3θ) + i sin(3θ)} = r³{cos(θ) + i sin(θ)}³, then we could just as easily have derived the triple-angle formulas that are not so easy to get by other means (just take a look at any high school trigonometry text).
- 6.
The C-R equations had, in fact, been known before either Cauchy or Riemann had been born, as the result of studies in hydrodynamics (see Gray, note 1, p. 60).
- 7.
The word finite is important: f(z) = z blows-up as ∣z ∣ → ∞ and so f(z) is not said to be analytic at infinity. In fact, there is a theorem in complex function theory that says the only functions that are analytic over the entire complex plane, even at infinity, are constants. In those cases all four partial derivatives in the C-R equations are identically zero.
- 8.
See, for example, Joseph Bak and Donald J. Newman, Complex Analysis (third edition), Springer 2010, pp. 35–40. While the C-R equations alone are not sufficient for analyticity, if the partial derivatives in them are continuous then we do have sufficiency.
- 9.
If the function f(z) is analytic everywhere in some region except for a finite number of singularities, mathematicians say f(z) is meromorphic in that region and I tell you this simply so you won’t be paralyzed by fear if you should ever come across that term.
- 10.
For the interesting history of this theorem, named after the English mathematician George Green (1793–1841), see my An Imaginary Tale, Princeton 2010, pp. 204–205.
- 11.
Can you show this? If not, go back and read Sect. 1.6 again.
- 12.
Because ∣eiTcos(θ)∣ = 1 for all T, and \( {\lim}_{\mathrm{T}\to \infty}\left|\frac{{\mathrm{T}}^2+\mathrm{aT}{\mathrm{e}}^{i\uptheta}}{{\mathrm{T}}^2+\mathrm{aT}\left({\mathrm{e}}^{i\uptheta}+{\mathrm{e}}^{-i\uptheta}\right)+{\mathrm{a}}^2}\right|=1 \).
- 13.
The singularity in (8.7.1) is called first-order because it appears to the first power. By extension, \( \frac{\mathrm{f}\left(\mathrm{z}\right)}{{\left(\mathrm{z}-{\mathrm{z}}_0\right)}^2} \) has a second-order singularity, and so on. I’ll say much more about high-order singularities in the next section.
- 14.
The limits on a are because, first, since n − m ≥ 2 it follows that m + 1 ≤ n − 1 and so a < 1. Also, for x ≪ 1 the integrand in (8.7.9) behaves as xa − 1 which integrates to \( \frac{{\mathrm{x}}^{\mathrm{a}}}{\mathrm{a}} \) and this blows-up at the lower limit of integration if a < 0. So, 0 < a.
- 15.
If you equate imaginary parts you get \( {\int}_{-\infty}^{\infty}\frac{\sin \left(\mathrm{x}\right)}{{\mathrm{e}}^{\mathrm{x}}+{\mathrm{e}}^{-\mathrm{x}}}\mathrm{dx}=0 \), which is trivially true since the integrand is odd.
- 16.
- 17.
Each of these branches exists for each new interval of θ of width 2π, with each branch lying on what is called a Riemann surface. The logarithmic function has an infinite number of branches, and so an infinite number of Riemann surfaces. The surface for 0 ≤ θ < 2π is what we observe as the usual complex plane (the entry level of our parking garage). The concept of the Riemann surface is a very deep one, and my comments here are meant only to give you an ‘elementary geometric feel’ for it.
- 18.
You should be able to show that this result immediately follows from the indefinite integral \( \int \frac{\mathrm{du}}{{\mathrm{u}}^2+{\mathrm{b}}^2}=\frac{1}{\mathrm{b}}{\tan}^{-1}\left(\frac{\mathrm{u}}{\mathrm{b}}\right) \) followed by the change of variable u = x + a.
© 2020 Springer Nature Switzerland AG
Nahin, P.J. (2020). Contour Integration. In: Inside Interesting Integrals. Undergraduate Lecture Notes in Physics. Springer, Cham. https://doi.org/10.1007/978-3-030-43788-6_8
Print ISBN: 978-3-030-43787-9
Online ISBN: 978-3-030-43788-6