Finite Temperature Field Theories: Review

  • Antal Jakovác
  • András Patkós
Part of the Lecture Notes in Physics book series (LNP, volume 912)


In this section, we briefly summarize our basic knowledge of quantum field theories, required for the investigation of equilibrium (and eventually nonequilibrium) features of quantum systems with a very large (infinite) number of degrees of freedom. The dynamical variables are best represented by fields, with a Lagrangian describing local interactions among them. The field theory thus defined can then be quantized and eventually put in a heat bath.



This introductory section very concisely summarizes the most important features of the process of constructing quantum field theories at zero and nonzero temperatures. There are excellent books and monographs in the literature where the interested reader can find further details. Without trying to be complete, we give here some basic references. For general questions on quantum field theory, one can make use of several books [1, 2, 3]. For the more specialized problems of finite-temperature field theory, a very useful review can be found in [4], and two more recent books are also available [5] and [6]. It is often useful to think about a field theory as the continuum limit of a theory defined on a spacetime lattice: an excellent book on this topic was written by Montvay and Münster [7]. Our review of renormalization in the next chapter is largely based on the book of Collins [8]. Beyond these basic works, there are numerous excellent books and papers on more specific subjects that we are going to quote in the text.

Throughout these notes, field variables are defined at spacetime points x of flat d-dimensional Minkowskian or Euclidean spacetime M, isomorphic to R^d. It is often useful to have an observer splitting the spacetime into time and space extensions, denoted by x = (t, x); here x is a (d − 1)-dimensional vector.

2.1 Review of Classical Field Theory

In classical field theory, the basic object is the field A: M → V, where V is some vector space. The state of the system at time t is characterized by the configuration A(t, x). The dynamics of the system is determined by the action S, which is a real-valued functional of the field configurations: S: A → R. The classically realizable time evolutions are solutions of the equation of motion (EoM), manifesting themselves as the extrema of the action, δS∕δA = 0; these solutions are also called physical configurations. In local field theories, the action can be written as an integral over the Lagrangian density, \(S =\int d^{d}x\,\mathcal{L}\). In the fundamental field theories, the Lagrangian density depends only on the field and its first derivatives: \(\mathcal{L}(A(x),\partial _{\mu }A(x))\). In this case, we can introduce the canonically conjugate field \(\varPi = \partial \mathcal{L}/\partial \dot{A}\).
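In this first-derivative case, the extremum condition \(\delta S/\delta A = 0\) takes the familiar Euler–Lagrange form:
$$\displaystyle{ \frac{\delta S} {\delta A(x)} = \frac{\partial \mathcal{L}} {\partial A(x)} - \partial _{\mu } \frac{\partial \mathcal{L}} {\partial (\partial _{\mu }A(x))} = 0. }$$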

We can apply to the configurations transformations represented by bijections U: A → A′. The transformations of the system form a group. We will consider only pointwise transformations, where the transformed new field at a given position depends on the original field at another position, i.e., \(A'(x') = U_{V}(A(x))\), where \(x' = U_{M}(x)\). The transformation is called internal if \(U_{M} = 1\) (identity); otherwise, it is an external transformation. We often encounter transformations determined by continuously varying parameters, i.e., when a certain subgroup of all transformations forms a Lie group. In this case, the transformation U can be characterized by a set of parameters c ∈ R^n. In the standard representation, the transformation is written as \(U(c) = e^{-ic_{a}T_{a}}\), where \(T_{a}\) are the generators of the transformation.
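As a minimal numerical sketch (with an assumed two-dimensional example, not taken from the text), one can exponentiate the Hermitian generator of planar rotations and check that the resulting one-parameter family U(c) is unitary and reproduces the rotation matrices:

```python
import numpy as np

def U(c, T):
    """U(c) = exp(-i c T) for a Hermitian generator T, via eigendecomposition."""
    w, V = np.linalg.eigh(T)
    return (V * np.exp(-1j * c * w)) @ V.conj().T

# generator of rotations in the plane (the Pauli matrix sigma_y)
T = np.array([[0, -1j], [1j, 0]])
c = 0.7

Uc = U(c, T)
rotation = np.array([[np.cos(c), -np.sin(c)], [np.sin(c), np.cos(c)]])

assert np.allclose(Uc, rotation)                  # exp(-i c sigma_y) is a rotation
assert np.allclose(Uc @ Uc.conj().T, np.eye(2))   # U(c) is unitary
```

Since T is Hermitian, its eigenvalues are real, so the eigenvalues of U(c) indeed lie on the unit circle.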

The transformation is a symmetry if S[A′] = S[A]. This also means that if A is an extremum of S (i.e., realizable motion), then A′ is an extremum, too. In several cases, we expect that the system exhibits some abstractly defined symmetry group. In this case, the fields are classified as irreducible representations of the abstract group, and S is built up from the group invariants constructed from the field products.

An important theorem (Noether’s theorem) states that if U is a continuous symmetry of the system, then a conserved current associated with it always exists. The proof can be found in all basic field theory textbooks [8], but since its logic of construction will be used later, we present a brief overview. Consider a one-parameter continuous symmetry \(U_{\tau }\), with \(U_{0} = 1\). The transformed field is \(A_{\tau } = U_{\tau }(A)\), and the transformed action is \(S_{\tau }[A] = S[A_{\tau }]\). If the transformation is a symmetry, then \(S_{\tau } = S\). Now let us consider an infinitesimal but position-dependent parameter δτ(x). Then two statements can be made. First, the change of the action can be written as
$$\displaystyle{ \delta S[A] = S_{\delta \tau }[A] - S[A] = -\int d^{d}xj^{\mu }(x)\partial _{\mu }\delta \tau, }$$
since at constant δ τ, it must be identically zero. Then, after partial integration, we obtain
$$\displaystyle{ \frac{\delta S} {\delta \tau (x)} = \partial _{\mu }j^{\mu }(x). }$$
On the other hand, \(A_{\delta \tau }\) is a variation of the original configuration. Therefore, if at δτ = 0 we start from a solution of the equations of motion (EoM), we obtain
$$\displaystyle{ \left.\frac{\delta S} {\delta \tau (x)}\right\vert _{A_{ph}} = 0. }$$
These statements together mean that j μ is a conserved current when evaluated at solutions of EoM.
The conserved current associated with spacetime translations is the energy–momentum tensor
$$\displaystyle{ \mathcal{T}_{\mu \nu } = \partial _{\mu }A \frac{\partial \mathcal{L}} {\partial (\partial ^{\nu }A)} - g_{\mu \nu }\mathcal{L}, }$$
or its symmetrized version [9]. In particular, the energy density and momentum density read as
$$\displaystyle{ \varepsilon =\dot{ A}\varPi -\mathcal{L},\qquad P_{i} =\varPi \partial _{i}A }$$
(for fields carrying discrete indices, the product of two field variables implies summation over the indices). Another often used example is the current associated with a linear internal symmetry. In this case, the infinitesimal transformations can be written with the help of the generators T ij of the symmetry transformations as \(\delta A_{i} = -iT_{ij}A_{j}\delta \tau\) (where we have written indices explicitly for a more transparent result). Thus
$$\displaystyle{ j_{i\mu } = T_{ij} \frac{\partial \mathcal{L}} {\partial (\partial ^{\mu }A_{j})}. }$$
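As a standard worked example, consider a free complex scalar field with the U(1) phase symmetry \(\varphi \rightarrow e^{-i\tau }\varphi\) (the overall sign and normalization of the current are convention-dependent):
$$\displaystyle{ \mathcal{L} = \partial _{\mu }\varphi ^{{\ast}}\partial ^{\mu }\varphi - m^{2}\varphi ^{{\ast}}\varphi,\qquad j_{\mu } = i\left(\varphi ^{{\ast}}\partial _{\mu }\varphi -\varphi \,\partial _{\mu }\varphi ^{{\ast}}\right), }$$
and on solutions of the EoM, \(\partial ^{2}\varphi = -m^{2}\varphi\), one checks directly that \(\partial ^{\mu }j_{\mu } = i(\varphi ^{{\ast}}\partial ^{2}\varphi -\varphi \,\partial ^{2}\varphi ^{{\ast}}) = 0\).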

2.2 Quantization and Path Integral

The main difference between a classical and a quantum system is the characterization of each of their states. While in a classical system the state could be fully described by the configuration of the generalized coordinates and the canonically conjugated momenta, in the quantum system we introduce a separate Hilbert space H for the states. The physical states are Ψ[A] ∈ H of unit norm | Ψ[A] |  = 1.

The transformations of the quantum system are isomorphisms U: H → H. These are (anti)linear maps that take states to states, i.e., they conserve the norm; therefore, the transformation must be (anti)unitary, \(U^{\dag } = U^{-1}\). In the continuous case, \(U_{c} = e^{-ic_{a}T_{a}}\), just as in the classical case; if U is unitary, then the generator T is Hermitian, \(T^{\dag } = T\).

Consider now a one-parameter subgroup \(U_{\tau } = e^{-i\tau T}\) of the linear transformations. The eigenstates of the transformation are eigenstates of the generator, too: if T Ψ = λ Ψ, then \(U\varPsi = e^{-i\tau \lambda }\varPsi\). The eigenvalues of T are real numbers; those of U lie on the unit circle. We can also describe the transformation of the system by transforming, instead of the states, the operators \(\hat{O}\) acting on H (Heisenberg picture), requiring that the inner products remain unchanged:
$$\displaystyle{ \langle \tilde{\varPsi }'\vert \hat{O}\vert \varPsi '\rangle =\langle \tilde{\varPsi } \vert \hat{O}'\vert \varPsi \rangle \rightarrow \hat{ O}' = U_{\tau }^{\dag }\hat{O}U_{\tau }, }$$
or infinitesimally \(\delta O = i[T,\hat{O}]\delta \tau\).

The generator of a transformation does not change under the effect of the same transformation: \(T' = U^{\dag }TU = T\). This makes it possible to identify the generator of a symmetry transformation with the conserved quantity belonging to this transformation: in field theory, this is the conserved quantity coming from Noether’s theorem. Therefore, the generators of the space and time translations are the Noether charges coming from the energy–momentum tensor, i.e., the conserved momentum and energy, respectively. After quantization, these become the momentum operator \(\hat{P}_{i}\) and Hamilton operator \(\hat{H}\), and so the space and time translations are \(e^{-i\hat{P}_{i}x^{i}/\hslash }\) and \(e^{-i\hat{H}t/\hslash }\), respectively (\(\hslash \) is introduced in order to have dimensionless quantities in the exponent).

The infinitesimal transformations of the system are interpreted as measurements. In an eigenstate, we obtain a definite value for the result of the measured quantity; in other cases, the projection of the arising state onto the eigenstates gives the probability amplitude of the possible outcomes of the measurement. Two measurements are not interchangeable if the corresponding generators do not commute.

In particular, in quantum field theories there is an operator that measures the field at spacetime position x, denoted by \(\hat{A}(x)\), and another one that measures the canonically conjugated momentum (\(\hat{\varPi }(x)\)). The generator of the space translations \(P_{i}\) is defined in any field theory with the help of the energy–momentum tensor as \(P_{i} =\int d\mathbf{y}\,\mathcal{T}_{0i}(\mathbf{y}) =\int d\mathbf{y}\,\varPi (\mathbf{y})\partial _{i}A(\mathbf{y})\). On the other hand, the fields transform under an infinitesimal space translation as \(\delta A(x) = A(x +\delta x) - A(x) = \partial _{i}A\delta x^{i}\). When substituted into the relation expressing the action of an infinitesimal translation on the field A as \((i/\hslash )\delta x^{i}[P_{i},A(x)] =\delta x^{i}\partial _{i}A(x)\), it leads to the canonical commutation/anticommutation relations
$$\displaystyle\begin{array}{rcl} & & [\hat{A}(t,\mathbf{x}),\hat{\varPi }(t,\mathbf{y})]_{\alpha } = i\hslash \delta (\mathbf{x} -\mathbf{y}),\qquad [\hat{A}(t,\mathbf{x}),\hat{A}(t,\mathbf{y})]_{\alpha } = 0, \\ & & \qquad \qquad \qquad \qquad [\hat{\varPi }(t,\mathbf{x}),\hat{\varPi }(t,\mathbf{y})]_{\alpha } = 0, {}\end{array}$$
where α = ±1 and \([X,Y ]_{\alpha } = XY -\alpha Y X\), i.e., commutator for α = 1 and anticommutator for α = −1. In the following, \(\hslash = 1\) units are used. The choice α = 1 applies for bosons, and α = −1 for fermions.
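These relations can be illustrated in a toy setting (a sketch, not from the text): for a single bosonic mode with the Fock space truncated at N states, the ladder-operator representation satisfies [a, a†] = 1 exactly, except for an artifact in the highest truncated state:

```python
import numpy as np

N = 6                                        # Fock-space truncation (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                            # creation operator

comm = a @ adag - adag @ a                   # [a, a^dagger]
# equals the identity except in the last diagonal entry, a truncation artifact
assert np.allclose(comm[:-1, :-1], np.eye(N - 1))
assert np.isclose(comm[-1, -1], -(N - 1))
```

The last diagonal entry, −(N − 1), is the price of working in a finite-dimensional truncation of the infinite-dimensional bosonic Hilbert space.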

The eigenvectors of \(\hat{A}\) (and also those of \(\hat{\varPi }\)) form an orthonormal basis. In this basis, we can expand any state, and using the projectors on the eigenstates also any operator. In particular, we can expand the time translation operator \(e^{-i\hat{H}t}\).

For bosonic fields (denoted by Φ(x)), because of the commutation relations (2.8), there exists a common eigenstate for all field measurement operators at time t, and also (separately) for all canonical momentum measurement operators:
$$\displaystyle{ \hat{\varPhi }(t,\mathbf{x})\left \vert \varphi \right \rangle =\varphi (t,\mathbf{x})\left \vert \varphi \right \rangle,\qquad \hat{\varPi }(t,\mathbf{x})\left \vert \pi \right \rangle =\pi (t,\mathbf{x})\left \vert \pi \right \rangle. }$$
Their inner product, using again (2.8), is
$$\displaystyle{ \left \langle \varphi \vert \pi \right \rangle = e^{i\int \!d^{d-1}\mathbf{x}\,\pi (\mathbf{x})\varphi (\mathbf{x}) }. }$$
Now we can consider a time interval [0, t], which we divide into N equal parts, each of length Δt = t∕N. At each division point \(r = 0,\mathop{\ldots },N\) (including the beginning and the end), we insert complete sets \(\mathbf{1} =\int (\prod _{\mathbf{x}}d\varphi _{r}(\mathbf{x}))\left \vert \varphi _{r}\right \rangle \left \langle \varphi _{r}\right \vert \) and \(\mathbf{1} =\int (\prod _{\mathbf{x}}d\pi _{r}(\mathbf{x}))\left \vert \pi _{r}\right \rangle \left \langle \pi _{r}\right \vert \) in alternating order. After a bit of algebra, we obtain
$$\displaystyle{ e^{-i\hat{H}t} =\int \mathcal{D}\varphi \mathcal{D}\pi e^{iS[\varphi,\pi ]}\left \vert \varphi _{ N}\right \rangle \left \langle \varphi _{0}\right \vert, }$$
$$\displaystyle\begin{array}{rcl} & & \mathcal{D}\varphi =\prod _{r,\mathbf{x}}d\varphi _{r}(\mathbf{x}),\quad \mathcal{D}\pi =\prod _{r,\mathbf{x}}d\pi _{r}(\mathbf{x}),\quad S[\varphi,\pi ] =\int \! dt'\,\left [\pi \dot{\varphi }-H(\varphi,\pi )\right ], \\ & & \dot{\varphi }_{r}\varDelta t =\varphi _{r+1} -\varphi _{r},\qquad H(\varphi,\pi ) = \frac{\left \langle \varphi \vert H\vert \pi \right \rangle } {\left \langle \varphi \vert \pi \right \rangle }. {}\end{array}$$
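The time-slicing logic behind this formula can be illustrated on a finite-dimensional toy problem (a sketch with assumed random Hermitian matrices, not the field-theoretic case): approximating \(e^{-i\hat{H}t}\) with \(\hat{H} = T + V\) by N alternating factors reproduces the exact evolution as the slicing is refined:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_herm(n):
    """Random Hermitian matrix, standing in for a toy Hamiltonian piece."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def expm_herm(H, t):
    """e^{-i H t} for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * t)) @ V.conj().T

T, V = rand_herm(4), rand_herm(4)   # "kinetic" and "potential" parts (assumed)
t = 1.0
exact = expm_herm(T + V, t)

def trotter(N):
    """N-slice approximation: (e^{-iT dt} e^{-iV dt})^N."""
    dt = t / N
    step = expm_herm(T, dt) @ expm_herm(V, dt)
    return np.linalg.matrix_power(step, N)

err = lambda N: np.linalg.norm(trotter(N) - exact)
assert err(256) < err(16) < err(2)   # the error shrinks as the slicing is refined
```

The first-order splitting error is controlled by the commutator [T, V] and vanishes linearly in Δt, which is why the alternating insertion of complete sets above becomes exact in the continuum limit.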

In the case of fermions, the situation is more complicated, because the field measurement operators \(\hat{\varPsi }(t,\mathbf{x})\) do not commute at a given time. To resolve the problem, one first introduces the Grassmann algebra \(\mathcal{G}\). It is a (graded) complex algebra generated by the unity and elements e i for which the algebraic product is antisymmetric \(e_{i}e_{j} = -e_{j}e_{i}\). For the physical applications, one also needs a star operation that connects the generator elements into pairs \({\ast}: e\mapsto e^{{\ast}}\). We can define a Hilbert space over the Grassmann algebra \(H(\mathcal{G})\), which means that if \(\left \vert \psi _{1}\right \rangle,\left \vert \psi _{2}\right \rangle \in H(\mathcal{G})\) and \(g_{1},g_{2} \in \mathcal{G}\), then \(g_{1}\left \vert \psi _{1}\right \rangle + g_{2}\left \vert \psi _{2}\right \rangle \in H(\mathcal{G})\). The scalar product is Grassmann-algebra-valued, \(\left \langle \psi _{1}\vert \psi _{2}\right \rangle \in \mathcal{G}\). In the physical applications, one introduces two Grassmann algebra generators for each spatial position and for each fermionic component, \(e_{\alpha,\mathbf{x}}\) and \(e_{\alpha,\mathbf{x}}^{{\ast}}\). The field operators \(\hat{\varPsi }(t,\mathbf{x})\) and \(\hat{\varPsi }^{\dag }(t,\mathbf{x})\) act linearly on \(H(\mathcal{G})\), i.e., \(\hat{\varPsi }(t,\mathbf{x})(g_{1}\left \vert \psi _{1}\right \rangle + g_{2}\left \vert \psi _{2}\right \rangle ) = g_{1}\hat{\varPsi }(t,\mathbf{x})\left \vert \psi _{1}\right \rangle + g_{2}\hat{\varPsi }(t,\mathbf{x})\left \vert \psi _{2}\right \rangle\), and similarly for \(\hat{\varPsi }^{\dag }(t,\mathbf{x})\).

Let us consider a single degree of freedom, where we have just two field operators \(\hat{\varPsi }\) and \(\hat{\varPsi }^{\dag }\). We assume, in agreement with most physical applications, that \(\hat{\varPi }= i\hat{\varPsi }^{\dag }\), which implies
$$\displaystyle{ [\hat{\varPsi },\hat{\varPsi }^{\dag }]_{ +} = 1. }$$
The corresponding Grassmann algebra generators are e and \(e^{{\ast}}\). The complete algebra is four-dimensional; it has the basis 1, e, \(e^{{\ast}}\), and \(ee^{{\ast}}\). We take an element from the Hilbert space \(\left \vert \psi \right \rangle \in H(\mathcal{G})\) and apply \(\hat{\varPsi }\) to it. We obtain
$$\displaystyle{ \left \vert \psi '\right \rangle =\hat{\varPsi } \left \vert \psi \right \rangle,\qquad \hat{\varPsi }\left \vert \psi '\right \rangle = 0. }$$
This means that either \(\left \vert \psi '\right \rangle = 0\), in which case \(\left \vert \psi \right \rangle\) itself is annihilated by \(\hat{\varPsi }\), or \(\left \vert \psi '\right \rangle \neq 0\) and is annihilated by \(\hat{\varPsi }\) (since \(\hat{\varPsi }^{2} = 0\)). Therefore, there exists an element in the Hilbert space that is annihilated by \(\hat{\varPsi }\): this is the vacuum \(\left \vert 0\right \rangle\). Applying \(\hat{\varPsi }^{\dag }\) to the vacuum, we obtain
$$\displaystyle{ \left \vert 1\right \rangle =\hat{\varPsi } ^{\dag }\left \vert 0\right \rangle,\quad \hat{\varPsi }^{\dag }\left \vert 1\right \rangle = 0,\qquad \hat{\varPsi }\left \vert 1\right \rangle =\hat{\varPsi }\hat{\varPsi } ^{\dag }\left \vert 0\right \rangle = (1 -\hat{\varPsi } ^{\dag }\hat{\varPsi })\left \vert 0\right \rangle = \left \vert 0\right \rangle. }$$
Therefore, the Hilbert space is two-dimensional. The basis elements are \(\left \vert 0\right \rangle\) and \(\left \vert 1\right \rangle\).
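This two-dimensional representation can be made fully explicit (a sketch, not from the text): with |0⟩ and |1⟩ as basis vectors, \(\hat{\varPsi }\) and \(\hat{\varPsi }^{\dag }\) become 2×2 matrices satisfying the anticommutation relation and \(\hat{\varPsi }^{2} = 0\):

```python
import numpy as np

ket0 = np.array([1, 0])            # vacuum |0>
ket1 = np.array([0, 1])            # occupied state |1>

Psi = np.array([[0, 1], [0, 0]])   # annihilation: Psi|1> = |0>, Psi|0> = 0
Psidag = Psi.T                     # creation:     Psidag|0> = |1>

assert np.allclose(Psi @ ket0, 0)                            # vacuum is annihilated
assert np.allclose(Psidag @ ket0, ket1)                      # |1> = Psidag|0>
assert np.allclose(Psi @ ket1, ket0)                         # Psi|1> = |0>
assert np.allclose(Psi @ Psidag + Psidag @ Psi, np.eye(2))   # {Psi, Psidag} = 1
assert np.allclose(Psi @ Psi, np.zeros((2, 2)))              # Psi^2 = 0
```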
Now we can define fermionic coherent states: we take a grade-1 Grassmann element, i.e., \(\eta =\alpha e +\beta e^{{\ast}}\). It is nilpotent, \(\eta ^{2} = 0\). The product \(\eta ^{{\ast}}\eta\) formed with the help of the conjugated element \(\eta ^{{\ast}} =\alpha ^{{\ast}}e^{{\ast}} +\beta ^{{\ast}}e\) is a commuting element of the algebra. The definition of the coherent state is
$$\displaystyle{ \left \vert \eta \right \rangle = e^{-\frac{1} {2} \eta ^{{\ast}}\eta +\eta \hat{\varPsi }^{\dag } }\left \vert 0\right \rangle = (1 -\frac{1} {2}\eta ^{{\ast}}\eta )\left \vert 0\right \rangle +\eta \left \vert 1\right \rangle. }$$
This is an eigenstate of \(\hat{\varPsi }\):
$$\displaystyle{ \hat{\varPsi }\left \vert \eta \right \rangle =\eta \left \vert 0\right \rangle =\eta \left \vert \eta \right \rangle, }$$
because of the nilpotency of η. The norm of \(\left \vert \eta \right \rangle\) is unity.
Two linearly independent coherent states would form a basis if there existed an inverse element in the Grassmann algebra. Though this is not the case here, we can introduce a formal integral over the Grassmann elements of grade 1 (Berezin integrals), which does the job for us:
$$\displaystyle{ \int d\eta = 0,\qquad \int d\eta \eta = 1. }$$
Then we obtain
$$\displaystyle{ \int \!d\eta ^{{\ast}}\!d\eta \,\left \vert \eta \right \rangle \!\left \langle \eta \right \vert = \mathbf{1}, }$$
$$\displaystyle{ \int \!d\eta ^{{\ast}}\!d\eta \,{\biggl [(1 -\eta ^{{\ast}}\eta )\left \vert 0\right \rangle \!\left \langle 0\right \vert +\eta \left \vert 1\right \rangle \!\left \langle 0\right \vert +\eta ^{{\ast}}\left \vert 0\right \rangle \!\left \langle 1\right \vert +\eta \eta ^{{\ast}}\left \vert 1\right \rangle \!\left \langle 1\right \vert \biggr ]} = \left \vert 0\right \rangle \!\left \langle 0\right \vert + \left \vert 1\right \rangle \!\left \langle 1\right \vert = \mathbf{1}. }$$
Similarly, we obtain
$$\displaystyle{ \int \!d\eta ^{{\ast}}\!d\eta \,\left \langle -\eta \vert \hat{O}\vert \eta \right \rangle =\mathop{ \text{Tr}}\hat{O}, }$$
$$\displaystyle\begin{array}{rcl} & & \int \!d\eta ^{{\ast}}\!d\eta \,\left [(1 -\eta ^{{\ast}}\eta )\langle 0\vert \hat{O}\vert 0\rangle -\eta ^{{\ast}}\langle 1\vert \hat{O}\vert 0\rangle +\eta \langle 0\vert \hat{O}\vert 1\rangle -\eta ^{{\ast}}\eta \langle 1\vert \hat{O}\vert 1\rangle \right ] \\ & & \quad =\langle 0\vert \hat{O}\vert 0\rangle +\langle 1\vert \hat{O}\vert 1\rangle. {}\end{array}$$
Note that for the trace, one has to evaluate an antisymmetric expectation value (2.21).
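The Berezin trace formula can be verified with a minimal symbolic Grassmann algebra (an illustrative implementation, not from the text): elements are dictionaries mapping ordered generator tuples to coefficients, with η as generator 0 and η∗ as generator 1, and the integral ∫dη∗dη picks out the coefficient of ηη∗:

```python
from itertools import product

def gmul(x, y):
    """Product of two Grassmann-algebra elements {(sorted generators): coeff}."""
    out = {}
    for (a, ca), (b, cb) in product(x.items(), y.items()):
        if set(a) & set(b):
            continue                        # e_i e_i = 0
        seq = list(a) + list(b)
        sign = 1                            # sign of the permutation sorting the generators
        for i in range(len(seq)):
            for j in range(len(seq) - 1 - i):
                if seq[j] > seq[j + 1]:
                    seq[j], seq[j + 1] = seq[j + 1], seq[j]
                    sign = -sign
        key = tuple(seq)
        out[key] = out.get(key, 0) + sign * ca * cb
    return out

def gadd(*xs):
    out = {}
    for x in xs:
        for k, v in x.items():
            out[k] = out.get(k, 0) + v
    return out

def scale(x, c):
    return {k: c * v for k, v in x.items()}

one, eta, etas = {(): 1}, {(0,): 1}, {(1,): 1}   # 1, eta, eta*
assert gmul(eta, eta) == {}                       # nilpotency

def berezin(x):
    """Integral d(eta*) d(eta), normalized so that eta eta* integrates to 1."""
    return x.get((0, 1), 0)                       # eta eta* = +(0, 1) in sorted order

# <-eta| O |eta> expanded as in the text, for a generic 2x2 matrix O
O = [[2.0, 3.0], [5.0, 7.0]]
matrix_element = gadd(
    scale(one, O[0][0]), scale(gmul(etas, eta), -O[0][0]),   # (1 - eta* eta) <0|O|0>
    scale(etas, -O[1][0]),                                   # -eta* <1|O|0>
    scale(eta, O[0][1]),                                     # +eta  <0|O|1>
    scale(gmul(etas, eta), -O[1][1]),                        # -eta* eta <1|O|1>
)
assert berezin(matrix_element) == O[0][0] + O[1][1]          # = Tr O
```

Only the terms proportional to η∗η survive the Berezin integration, which is exactly why the antisymmetric combination ⟨−η|O|η⟩ reproduces the trace.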

The case of a single degree of freedom discussed above can be taken over to any number of fermionic dynamical variables without any problem. The most important findings are the integral formula representing the spectral decomposition of the identity and the trace formula.

Now we can write explicitly the representation of the time evolution operator with the help of the fermionic integrals. Just as in the bosonic case, we split the time interval [0, t] into N parts and insert an identity at each of these division points. Finally, we obtain
$$\displaystyle{ e^{-i\hat{H}t} =\int \mathcal{D}\psi ^{\dag }\mathcal{D}\psi \;e^{iS[\psi ^{{\ast}},\psi ] }\;\left \vert \psi _{N}\right \rangle \left \langle \psi _{0}\right \vert, }$$
$$\displaystyle\begin{array}{rcl} & & \mathcal{D}\psi ^{\dag }\mathcal{D}\psi =\prod _{ n=1}^{N}d\psi _{ n}^{{\ast}}d\psi _{ n},\qquad \psi _{r+1} -\psi _{r} = \partial _{t}\psi _{r}\varDelta t, \\ & & S[\psi ^{{\ast}},\psi ] =\int dt\left (\psi ^{{\ast}}\dot{\psi }- H[\psi ^{{\ast}},\psi ]\right ),\qquad H[\psi ^{{\ast}},\psi ] = \left \langle \psi \vert H\vert \psi \right \rangle.{}\end{array}$$
We have obtained analogous formulas for both the bosonic and fermionic theories. Symbolically, we write in common notation
$$\displaystyle{ e^{-i\hat{H}t} =\int \mathcal{D}Ae^{iS[A]}\left \vert A_{ N}\right \rangle \left \langle A_{0}\right \vert. }$$
Once we have a representation for the time evolution operator, we can work out any n-point correlation function. We are primarily interested in expectation values of the product of Heisenberg-picture operators:
$$\displaystyle{ \mathop{\text{Tr}}\hat{A}_{n}(t_{n})\ldots \hat{A}_{1}(t_{1})\hat{\varrho }, }$$
where \(\hat{\varrho }\) is the initial density operator defined at time t0. Using the cyclic property of the trace and the time evolution of the field operators \(\hat{A}(t) = e^{iH(t-t_{0})}\hat{A}e^{-iH(t-t_{0})}\), we have
$$\displaystyle{ \mathop{\text{Tr}}e^{-iH(t_{i}-t_{f})}e^{-iH(t_{f}-t_{n})}\hat{A}_{ n}e^{-iH(t_{n}-t_{n-1})}\ldots e^{-iH(t_{2}-t_{1})}\hat{A}_{ 1}e^{-iH(t_{1}-t_{0})}\hat{\varrho }e^{-iH(t_{0}-t_{i})}, }$$
where t i and t f are two arbitrary moments of time, practically \(t_{i} \rightarrow -\infty \) and \(t_{f} \rightarrow \infty \).
Now we can substitute the path integral representation of the time evolution operator. Between two time translations, there appear matrix elements of the \(\hat{A}\) operators evaluated between states sitting on the first point of the left side and the next point of the right side of the time evolution operator. We can write (also introducing a complete system with \(\left \vert \pi \right \rangle\) representation in the bosonic case)
$$\displaystyle\begin{array}{rcl} & & \frac{\langle \pi _{N}\vert \hat{\varPi }\vert \varphi _{0}\rangle } {\left \langle \pi _{N}\vert \varphi _{0}\right \rangle } =\pi _{N},\quad \frac{\langle \pi _{N}\vert \hat{\varPhi }\vert \varphi _{0}\rangle } {\left \langle \pi _{N}\vert \varphi _{0}\right \rangle } =\varphi _{0}, \\ & & \frac{\langle \psi _{N}\vert \hat{\varPsi }\vert \psi _{0}\rangle } {\left \langle \psi _{N}\vert \psi _{0}\right \rangle } =\psi _{0},\quad \frac{\langle \psi _{N}\vert \hat{\varPsi }^{\dag }\vert \psi _{0}\rangle } {\left \langle \psi _{N}\vert \psi _{0}\right \rangle } =\psi _{ N}^{{\ast}}.{}\end{array}$$
We will also encounter the matrix elements of the density matrix
$$\displaystyle{ \varrho [A_{N},A_{0}] = \frac{\langle A_{N}\vert \hat{\varrho }\vert A_{0}\rangle } {\langle A_{N}\vert A_{0}\rangle }, }$$
which can be, of course, a rather complicated functional. In the Δ t → 0 limit, we will denote the arguments by A and \(\dot{A}\). We have finally, in symbolic notation,
$$\displaystyle{ \mathop{\text{Tr}}\hat{A}_{n}(x_{n})\ldots \hat{A}_{1}(x_{1})\hat{\varrho } =\int \limits _{ P/A}^{C\,path}\mathcal{D}A\,\,e^{i\int _{C}dt\mathcal{L}}A_{ n}(t_{n})\ldots A_{1}(t_{1})\varrho [A,\dot{A}]. }$$
Here P∕A means periodic/antiperiodic: for bosons we must apply periodic, for fermions antiperiodic, boundary conditions. In this representation, the time flows along a closed time path (CTP) \(C: t_{i} \rightarrow t_{0} \rightarrow t_{1} \rightarrow \cdots \rightarrow t_{n} \rightarrow t_{f} \rightarrow t_{i}\). To emphasize the order of the time arguments along the contour, we can introduce a parameterization t(τ). In the “contour time” τ, the contour is monotonic and closed. The representation of the time evolution in fact uses operators that depend on the contour time, and we obtain contour time ordering in a natural way.

In real time, however, if the time arguments appear without any definite order, the contour runs back and forth several times. If for a certain section it is monotonic, those subsections can be merged into a common section; e.g., if \(t_{1} < t_{2} < t_{3}\), then we can write instead of \(t_{1} \rightarrow t_{2} \rightarrow t_{3}\) simply \(t_{1} \rightarrow t_{3}\). Moreover, we can always plug in a unit operator as \(e^{iH(t'-t)}e^{-iH(t'-t)}\), i.e., we can include a bypass anywhere in the time chain: instead of \(t_{n} \rightarrow t_{n+1}\), we can write \(t_{n} \rightarrow t' \rightarrow t_{n+1}\). According to these observations, the time contour can be standardized as \(t_{i} \rightarrow t_{f} \rightarrow t_{i} \rightarrow \cdots \rightarrow t_{f} \rightarrow t_{i}\), where a sufficient number of back-and-forth sections must be allowed for it to contain (taking into account the ordering) all the operators. This form depends on the operators only through their number. On the sections where the time flows as t i  → t f , the contour ordering is time ordering, while along the backward-running contours, the contour ordering is anti-time-ordering.

To simplify the treatment, one usually considers only two time sections (this is the minimal choice, since we always have to return to the same time \(t_{i}\)). The path integral variables are called \(A^{(1)}\) for the contour segment \(t_{i} \rightarrow t_{f}\), and \(A^{(2)}\) for the section \(t_{f} \rightarrow t_{i}\). In the contour ordering, segment 2 always follows segment 1, independently of the actual time values. On segment 1, the usual action
$$\displaystyle{ S =\int _{ t_{i}}^{t_{f} }\!dt\,L }$$
is used. On segment 2, where the orientation of the integration is \(t_{f} \rightarrow t_{i}\), the action −S can be used with the same ordering of the endpoints as on segment 1. With this setup, we can study correlation functions like
$$\displaystyle{ F =\mathop{ \text{Tr}}\left [T^{{\ast}}(\hat{A}_{ n}(x_{n})\ldots \hat{A}_{k}(x_{k}))\,T(\hat{A}_{k-1}(x_{k-1})\ldots \hat{A}_{1}(x_{1}))\hat{\varrho }\right ], }$$
where \(T^{{\ast}}\) is anti-time-ordering and T is time-ordering. In this simplified setup, F has the following path integral representation:
$$\displaystyle{ F =\int \limits _{P/A}\mathcal{D}A\,e^{iS[A^{(1)}]-iS[A^{(2)}] }A_{n}^{(2)}(t_{ n})\ldots A_{1}^{(1)}(t_{ 1})\varrho [A^{(1)},\dot{A}^{(1)}], }$$
where \(\mathcal{D}A = \mathcal{D}A^{(1)}\mathcal{D}A^{(2)}\). This is the generic path integral representation for arbitrary initial conditions encoded into the density matrix.
The generating functional of the n-point functions can be written as
$$\displaystyle{ Z[J] =\int \limits _{P/A}\mathcal{D}A\,e^{iS[A^{(1)}]-iS[A^{(2)}]+\int \!JA }\varrho [A^{(1)},\dot{A}^{(1)}], }$$
where \(J = (J^{(1)},J^{(2)})\) and \(\int JA =\int (J^{(1)}A^{(1)} + J^{(2)}A^{(2)})\). Note that \(Z[J = 0] =\mathop{ \text{Tr}}\hat{\varrho } = 1\). An n-point correlation function can be obtained as
$$\displaystyle{ F = \left. \frac{\partial ^{n}Z[J]} {\partial J_{n}^{(2)}(x_{n})\ldots \partial J_{k}^{(2)}(x_{k})\partial J_{k-1}^{(1)}(x_{k-1})\ldots \partial J_{1}^{(1)}(x_{1})}\right \vert _{J=0}. }$$
In the case of fermions, all derivatives must act from the left.
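The mechanics of differentiating the generating functional can be mimicked in a zero-dimensional Gaussian toy model (an illustrative sketch with an assumed 2×2 quadratic "action", not from the text), where the second derivatives of \(Z(J) = e^{\frac{1}{2}JA^{-1}J}\) at J = 0 reproduce the two-point function \((A^{-1})_{ij}\):

```python
import numpy as np

A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive-definite "action" matrix (assumed)
Ainv = np.linalg.inv(A)

def Z(J):
    """Gaussian generating function, normalized so that Z(0) = 1."""
    return np.exp(0.5 * J @ Ainv @ J)

# two-point function from second derivatives of Z at J = 0 (central differences)
h = 1e-4

def two_point(i, j):
    def shifted(si, sj):
        J = np.zeros(2)
        J[i] += si * h
        J[j] += sj * h
        return Z(J)
    return (shifted(1, 1) - shifted(1, -1) - shifted(-1, 1) + shifted(-1, -1)) / (4 * h * h)

G = np.array([[two_point(i, j) for j in range(2)] for i in range(2)])
assert np.allclose(G, Ainv, atol=1e-6)   # <x_i x_j> = (A^{-1})_{ij}
```

The finite-difference derivatives at J = 0 play the role of the functional derivatives in the formula above; in the field theory, the matrix \(A^{-1}\) becomes the propagator.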

2.3 Equilibrium

While for an arbitrary initial density matrix the above path integral is hardly accessible for practical use (but see [10]), we can have more definite yet still simple formulas if we are in equilibrium. There, the form of the density matrix is
$$\displaystyle{ \hat{\varrho }= \frac{1} {Z_{0}}e^{-\beta \hat{H}},\qquad Z_{ 0} =\mathop{ \text{Tr}}e^{-\beta \hat{H}}. }$$
Note that thermodynamics tells us that \(Z_{0} = e^{-\beta F}\), where F is the free energy of the system.
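As a quick numerical illustration with an assumed two-level spectrum, the relation \(Z_{0} = e^{-\beta F}\) and the thermodynamic identity \(\langle E\rangle = -\partial \ln Z_{0}/\partial \beta\) can be checked directly:

```python
import numpy as np

beta, delta = 2.0, 1.5                 # inverse temperature and level splitting (assumed)
E = np.array([0.0, delta])             # spectrum of a toy two-level Hamiltonian

Z0 = np.exp(-beta * E).sum()           # Z0 = Tr exp(-beta H)
F = -np.log(Z0) / beta                 # free energy, Z0 = exp(-beta F)
assert np.isclose(np.exp(-beta * F), Z0)

# internal energy from the equilibrium density matrix rho = exp(-beta H)/Z0
p = np.exp(-beta * E) / Z0
E_mean = (p * E).sum()

# consistency check: <E> = -d ln Z0 / d beta (finite difference)
h = 1e-6
dlnZ = (np.log(np.exp(-(beta + h) * E).sum())
        - np.log(np.exp(-(beta - h) * E).sum())) / (2 * h)
assert np.isclose(E_mean, -dlnZ, atol=1e-6)
```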
Now we can exploit the fact that formally, this density matrix is similar to the time evolution operator continued to the imaginary time value t = −iβ. There we can use the representation discussed in the previous subsection. Choosing the time arguments (only their difference matters) \(t_{i} \rightarrow t_{i} - i\beta\), we have
$$\displaystyle{ e^{-\beta \hat{H}} =\int \mathcal{D}A\,e^{i\int _{t_{i}}^{t_{i}-i\beta }\!\!dt\,L }\left \vert A_{N}\right \rangle \left \langle A_{0}\right \vert. }$$
In the action, we can introduce the integration variable τ with the definition \(t = t_{i} - i\tau\), and define new field variables as \(A(t_{i} - i\tau ) = A^{(3)}(\tau )\). Then there appears
$$\displaystyle{ S_{E} = -i\!\!\!\int \limits _{t_{i}}^{t_{i}-i\beta }\!\!dt\,L[A(t),\partial _{ t}A(t)] = -\int \limits _{0}^{\beta }\!d\tau \,L[A^{(3)}(\tau ),i\partial _{\tau }A^{(3)}(\tau )]. }$$
The corresponding contour section continuously joins the end of segment 2, and it is labeled as segment 3. The time contour then goes out to complex time values, as can be seen in Fig. 2.1.
Fig. 2.1 Closed time path contour for equilibrium

For the generating functional, we introduce external currents for all the sections of the contour. Then it has the form
$$\displaystyle{ Z[J] = Z_{0}\left \langle e^{\int \!JA}\right \rangle =\mathop{ \text{Tr}}e^{-\beta \hat{H}}e^{\int \!J\hat{A}} =\int \limits _{ P/A}\mathcal{D}A\,e^{iS[A^{(1)}]-iS[A^{(2)}]-S_{ E}[A^{(3)}]+\int \!JA }, }$$
where now \(J = (J^{(1)},J^{(2)},J^{(3)})\). With this definition, in view of the second equality, the contribution to \(Z_{0} = Z[J = 0] =\mathop{ \text{Tr}}e^{-\beta \hat{H}}\) comes purely from segment 3. The free energy defined from Z[J] can be considered a free energy in the presence of an external time-dependent field.

The consequence of equilibrium is that the observables are time-translation invariant. Therefore, we can put the initial density matrix anywhere in time, even \(t_{i} \rightarrow -\infty \). But in this case, all propagation between contour sections \(C_{1,2}\) and \(C_{3}\) is suppressed, since it would require infinitely long propagation. This means that segment 3 factorizes from segments 1 and 2. The formalism that uses exclusively segment 3 is called the Euclidean, imaginary-time, or Matsubara formalism. The theory built on contours 1 and 2 is called the real-time, or Keldysh, formalism.

If there is a conserved quantity in the system, i.e., if there is an operator \(\hat{N}\) that commutes with the Hamiltonian \([\hat{H},\hat{N}] = 0\), then we can associate a chemical potential with this operator. Then the density matrix reads as
$$\displaystyle{ \hat{\varrho }= \frac{1} {Z_{0}}e^{-\beta (\hat{H}-\mu \hat{N})},\qquad Z_{ 0} =\mathop{ \text{Tr}}e^{-\beta (\hat{H}-\mu \hat{N})}. }$$
Formally, the treatment of this system is equivalent to that presented above, with the modified Hamiltonian \(\hat{H}' =\hat{ H} -\mu \hat{ N}\) in the imaginary-time evolution operator, or, in the path integral formalism, with L′ = L + μN.

2.4 Propagators

Let us consider the two-point functions in more detail. Path integration implies contour time ordering in the integrand, so we have several choices for two arbitrary local operators A and B:
$$\displaystyle{ iG_{AB}^{(ij)}(t) =\langle \text{T}_{ C}\,A^{(i)}(t)B^{(j)}(0)\rangle }$$
(here and below, the distinctive sign \(\hat{\ }\) will be omitted from the operators for the sake of notational simplicity). In terms of operator expectation values, we have
$$\displaystyle\begin{array}{rcl} & & iG_{AB}^{(12)}(t) =\alpha \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H}B(0)A(t)\right ] \\ & & iG_{AB}^{(21)}(t) = \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H}A(t)B(0)\right ] \\ & & iG_{AB}^{(11)}(t) = \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H}\text{T}A(t)B(0)\right ] =\varTheta (t)\,iG_{ AB}^{(21)}(t) +\varTheta (-t)\,iG_{ AB}^{(12)}(t) \\ & & iG_{AB}^{(22)}(t) = \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H}\text{T}^{{\ast}}A(t)B(0)\right ] =\varTheta (t)\,iG_{ AB}^{(12)}(t) +\varTheta (-t)\,iG_{ AB}^{(21)}(t) \\ & & G_{AB}^{(33)}(\tau ) = \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H}\text{T}_{\tau }A(-i\tau )B(0)\right ] =\varTheta (\tau )\,iG_{ AB}^{(21)}(-i\tau ) +\varTheta (-\tau )\,iG_{ AB}^{(12)}(-i\tau ).{}\end{array}$$
In the case of nonzero chemical potential, we should use the replacement H → H′. Remark: for \(G_{AB}^{(33)}\), as is usual, we do not include the imaginary factor i in the definition.
The following identity is always true:
$$\displaystyle{ G_{AB}^{(11)} + G_{ AB}^{(22)} = G_{ AB}^{(12)} + G_{ AB}^{(21)}. }$$
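This follows directly from the Θ-function decompositions of \(G^{(11)}\) and \(G^{(22)}\), using \(\varTheta (t) +\varTheta (-t) = 1\):
$$\displaystyle{ G_{AB}^{(11)}(t) + G_{AB}^{(22)}(t) = \left [\varTheta (t) +\varTheta (-t)\right ]\left (G_{AB}^{(12)}(t) + G_{AB}^{(21)}(t)\right ) = G_{AB}^{(12)}(t) + G_{AB}^{(21)}(t). }$$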
To avoid the use of nonindependent quantities, it is sometimes useful to change to another basis, called a retarded/advanced (R/A) basis. We introduce the fields
$$\displaystyle{ A^{(r)} = \frac{A^{(1)} + A^{(2)}} {2},\qquad A^{(a)} = A^{(1)} - A^{(2)}. }$$
The propagators of the ordered products of A(r) and A(a) are expressed in terms of the previously defined contour-ordered quantities as follows:
$$\displaystyle\begin{array}{rcl} & & iG_{AB}^{(ra)}(t) = iG_{AB}^{(11)}(t) - iG_{AB}^{(12)}(t) =\varTheta (t)\left (iG_{AB}^{(21)}(t) - iG_{AB}^{(12)}(t)\right ) \\ & & iG_{AB}^{(ar)}(t) = iG_{AB}^{(11)}(t) - iG_{AB}^{(21)}(t) = -\varTheta (-t)\left (iG_{AB}^{(21)}(t) - iG_{AB}^{(12)}(t)\right ) \\ & & iG_{AB}^{(rr)}(t) = \frac{iG_{AB}^{(21)}(t) + iG_{AB}^{(12)}(t)} {2}, {}\end{array}$$
while \(G_{AB}^{(aa)} = 0\). The nonzero combinations are called the retarded, advanced, and Keldysh propagators, respectively.
As these equations show, all propagators can be expressed through G(21) and G(12); this is true also out of equilibrium. Moreover, in equilibrium, these two apparently independent propagators are related through the Kubo–Martin–Schwinger (KMS) relation. Consider the 12 propagator of the operators A and B and use the cyclic property of the trace to obtain
$$\displaystyle{ iG_{AB}^{(12)}(t) =\alpha \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H'}B(0)A(t)\right ] =\alpha \frac{1} {Z_{0}}\mathop{ \text{Tr}}\left [e^{-\beta H'}e^{\beta H'}A(t)e^{-\beta H'}B(0)\right ]. }$$
We assume that the operator A has a definite charge with respect to the symmetry, i.e., [N, A] = qA. Then this implies \(e^{-\beta \mu N}Ae^{\beta \mu N} = e^{-\beta \mu q}A\). We also use the time translation \(e^{iHt'}A(t)e^{-iHt'} = A(t + t')\), applicable also to imaginary t′ = −i β values. Then we have
$$\displaystyle\begin{array}{rcl} e^{\beta H'}A(t)e^{-\beta H'}& =& e^{\beta H}e^{-\beta \mu N}A(t)e^{\beta \mu N}e^{-\beta H} \\ & =& e^{-\beta \mu q}e^{\beta H}A(t)e^{-\beta H} = e^{-\beta \mu q}A(t - i\beta ).{}\end{array}$$
Therefore,
$$\displaystyle{ iG_{AB}^{(12)}(t) =\alpha e^{-\beta \mu q}iG^{(21)}(t - i\beta ), }$$
which leads in Fourier space to the KMS relation:
$$\displaystyle{ iG_{AB}^{(12)}(\omega ) =\alpha e^{-\beta (\omega +\mu q)}iG^{(21)}(\omega ). }$$
Now all propagators can be expressed solely through G(12) or G(21).
We can also define the spectral function \(\varrho\) as
$$\displaystyle{ \varrho _{AB}(x) = iG_{AB}^{(21)}(x) - iG_{ AB}^{(12)}(x). }$$
The spectral function has many advantageous properties, which we summarize in Appendix A: it satisfies a sum rule, and under some conditions, it is a positive definite function at positive frequencies. It is therefore advantageous to choose the spectral function as the basic quantity describing two-point correlations in equilibrium thermodynamics.
From the KMS relation (2.49), we obtain
$$\displaystyle{ iG_{AB}^{(12)}(k) =\alpha n_{\alpha }(k_{ 0} +\mu q)\varrho (k),\qquad iG_{AB}^{(21)}(k) = (1 +\alpha n_{\alpha }(k_{ 0} +\mu q))\varrho (k), }$$
where
$$\displaystyle{ n_{\alpha }(\omega ) = \frac{1} {e^{\beta \omega }-\alpha } }$$
is the Bose–Einstein (α = 1) or Fermi–Dirac (α = −1) distribution, respectively. We note that although this is a discussion of the full interacting theory, the same distribution function always appears, irrespective of the choice of the operators A and B. The distribution function satisfies the equality
$$\displaystyle{ n_{\alpha }(\omega ) + n_{\alpha }(-\omega )+\alpha = 0. }$$
The dynamical information is exclusively contained in the spectral function. The Keldysh propagator is expressed as
$$\displaystyle{ iG_{AB}^{(rr)}(k) = \left (\frac{1} {2} +\alpha n_{\alpha }(k_{0} +\mu q)\right )\varrho _{AB}(k). }$$
The proportionality function \((1/2 +\alpha n_{\alpha }(\omega ))\) is an odd (antisymmetric) function of ω for both statistics.
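These statistical-factor identities are easy to verify numerically. The following minimal Python sketch (μ = 0, working only with the coefficient functions multiplying ϱ, not with field operators) checks the identity for n_α, the KMS ratio of the 12 and 21 coefficients, and the oddness of 1/2 + αn_α:

```python
import math

def n(alpha, beta, w):
    """Bose-Einstein (alpha = +1) or Fermi-Dirac (alpha = -1) distribution."""
    return 1.0 / (math.exp(beta * w) - alpha)

beta, w = 2.0, 0.7
for alpha in (+1, -1):
    # n_alpha(w) + n_alpha(-w) + alpha = 0
    assert abs(n(alpha, beta, w) + n(alpha, beta, -w) + alpha) < 1e-12
    # KMS at mu = 0: the coefficient of rho in iG12 over that in iG21
    iG12 = alpha * n(alpha, beta, w)
    iG21 = 1.0 + alpha * n(alpha, beta, w)
    assert abs(iG12 / iG21 - alpha * math.exp(-beta * w)) < 1e-12
    # the Keldysh weight 1/2 + alpha n_alpha(w) is odd in w
    assert abs((0.5 + alpha * n(alpha, beta, w))
               + (0.5 + alpha * n(alpha, beta, -w))) < 1e-12
```

The numerical values of β and ω are arbitrary; the checks hold identically for any choice.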
The retarded and advanced propagators read as
$$\displaystyle{ iG_{AB}^{(ra)}(x) =\varTheta (t)\varrho _{ AB}(x),\qquad iG_{AB}^{(ar)}(x) = -\varTheta (-t)\varrho _{ AB}(x). }$$
In Fourier space, using the Fourier transform of the step function, \(\varTheta (\omega ) = i/(\omega +i\varepsilon )\vert _{\varepsilon \rightarrow 0^{+}}\), one derives the convolution
$$\displaystyle{ G_{AB}^{(ra)/(ar)}(k) =\int \frac{d\omega } {2\pi } \frac{\varrho _{AB}(\omega,\mathbf{k})} {k_{0} -\omega \pm i\varepsilon }, }$$
which is of the form of a dispersion relation, called the Kramers–Kronig relation. The inverse relation is
$$\displaystyle{ \varrho _{AB}(k) =\mathop{ \text{Disc}}_{k_{0}}iG_{AB}^{(ra)}(k). }$$
Note that the discontinuity is equal to − 2 times the imaginary part of \(G_{AB}^{(ra)}\) only if the spectral function is real, i.e., when B = A.
Let us examine the imaginary time propagator, too. Since the imaginary contour runs in the interval [0, β], the range of the argument of the 33 propagator (which is the difference of two imaginary time values) is τ ∈ [−β, β]. For negative τ values, by the definition (2.42), we have
$$\displaystyle{ G_{AB}^{(33)}(-\tau +\beta ) = iG_{ AB}^{(21)}(i\tau - i\beta ) =\alpha iG_{ AB}^{(12)}(i\tau ) =\alpha G_{ AB}^{(33)}(-\tau ). }$$
With this relation we can extend the definition of the 33 propagator to the complete imaginary axis as an (anti)periodic function.
Since in the β > τ > 0 range, we have \(G^{(33)}(\tau ) = iG^{(21)}(-i\tau )\), it follows that
$$\displaystyle{ G_{AB}^{(33)}(\tau,\mathbf{k}) = \int \!\frac{d\omega } {2\pi }\,e^{-i(-i\tau )\omega }(1 +\alpha n_{\alpha }(\omega ))\varrho _{ AB}(\omega,\mathbf{k}) = \int \!\frac{d\omega } {2\pi }\,\frac{e^{(\beta -\tau )\omega }} {e^{\beta \omega }-\alpha }\varrho _{AB}(\omega,\mathbf{k}). }$$
We arrive at a particularly simple form if B = A is a bosonic operator and the spectral function depends only on \(k = \vert \mathbf{k}\vert \). Then the negative-frequency part can be mapped onto the positive-frequency part, and
$$\displaystyle{ G_{AA}^{(33)}(\tau,k) =\int \limits _{ 0}^{\infty }\frac{d\omega } {2\pi } \frac{\cosh \left (\left ( \frac{\beta } {2}-\tau \right )\omega \right )} {\sinh \frac{\beta \omega } {2}} \varrho _{AA}(\omega,k). }$$
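For a single free bosonic mode of energy ω_k (the spectral function collapsing to delta functions at ±ω_k, as derived in the next section), the remaining ω integral is trivial. A hedged Python sketch checking the bosonic periodicity G^{(33)}(β − τ) = G^{(33)}(τ) and the equal-time limit (parameter values arbitrary):

```python
import math

beta, m, k = 2.0, 1.0, 0.3
wk = math.sqrt(k * k + m * m)

def G33(tau):
    # the positive-frequency delta in rho_AA picks out omega = wk
    return math.cosh((beta / 2 - tau) * wk) / (2 * wk * math.sinh(beta * wk / 2))

tau = 0.37
# bosonic periodicity: G33(beta - tau) = G33(tau)
assert abs(G33(beta - tau) - G33(tau)) < 1e-12
# equal-time value is the thermal tadpole (1 + 2 n_B(wk)) / (2 wk)
nB = 1.0 / (math.exp(beta * wk) - 1.0)
assert abs(G33(0.0) - (1 + 2 * nB) / (2 * wk)) < 1e-12
```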
Being (anti)periodic, G(33) has a discrete frequency spectrum in Fourier space. We use the following definition for the Fourier transformation:
$$\displaystyle{ f(\omega _{n}) =\int \limits _{ 0}^{\beta }d\tau \,e^{i\omega _{n}\tau }f(\tau ),\qquad f(\tau ) = T\sum \limits _{n}e^{-i\omega _{n}\tau }f(\omega _{n}). }$$
The (anti)periodicity dictates the possible values of the frequency:
$$\displaystyle{ \omega _{n} = \left \{\begin{array}{ll} 2\pi nT\quad &\mathrm{bosons}\\ (2n + 1)\pi T\quad &\mathrm{fermions.}\\ \end{array} \right. }$$
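The frequency values are fixed precisely by requiring that the Fourier modes e^{−iω_nτ} be periodic (bosons) or antiperiodic (fermions) under τ → τ + β; a quick numerical sketch:

```python
import cmath
import math

T = 1.0
beta, tau = 1.0 / T, 0.3
for n in range(-3, 4):
    wb = 2 * math.pi * n * T            # bosonic Matsubara frequency
    wf = (2 * n + 1) * math.pi * T      # fermionic Matsubara frequency
    # periodic for bosons, antiperiodic for fermions
    assert abs(cmath.exp(-1j * wb * (tau + beta)) - cmath.exp(-1j * wb * tau)) < 1e-12
    assert abs(cmath.exp(-1j * wf * (tau + beta)) + cmath.exp(-1j * wf * tau)) < 1e-12
```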
The actual form of the Fourier transform reads as
$$\displaystyle{ G_{AB}^{(33)}(\omega _{ n},\mathbf{k}) =\int \limits _{ 0}^{\beta }d\tau \,e^{i\omega _{n}\tau }G_{AB}^{(33)}(\tau,\mathbf{k}) = \int \!\frac{d\omega } {2\pi }\,\frac{\varrho _{AB}(\omega,\mathbf{k})} {\omega -i\omega _{n}}. }$$
If we compare it with the Kramers–Kronig relation obtained for the retarded propagator, we obtain (k stands now for the four-vector of the momentum):
$$\displaystyle{ -G_{AB}^{(33)}(i\omega _{n} \rightarrow k_{0} + i\varepsilon,\mathbf{k}) = G_{AB}^{(ra)}(k). }$$
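For a free bosonic mode with spectral function concentrated at ±ω_k (derived in the next section), the ω integral in the Matsubara propagator can be carried out explicitly; the sketch below checks the resulting 1/(ω_n² + ω_k²) and its continuation iω_n → k_0 against the retarded form (parameter values arbitrary):

```python
import math

T, m, k = 1.3, 0.8, 0.5
wk = math.sqrt(k * k + m * m)

def G33(nn):
    """Spectral integral with rho_0 = (2pi/2wk)(delta(w - wk) - delta(w + wk))."""
    wn = 2 * math.pi * nn * T
    return (1 / (2 * wk)) * (1 / complex(wk, -wn) - 1 / complex(-wk, -wn))

for nn in range(-3, 4):
    wn = 2 * math.pi * nn * T
    assert abs(G33(nn) - 1 / (wn * wn + wk * wk)) < 1e-12

# continuing i w_n -> k0 (with the overall sign flip) gives the retarded 1/(k0^2 - wk^2)
k0 = 2.0
G_ra = -(1 / (2 * wk)) * (1 / (wk - k0) - 1 / (-wk - k0))
assert abs(G_ra - 1 / (k0 * k0 - wk * wk)) < 1e-12
```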

2.5 Free Theories, Propagators, Free Energy

The path integral can be performed if the action is Gaussian. In order to unify the treatment of fermionic and bosonic fields, we write the Lagrangian density as
$$\displaystyle{ \mathcal{L}(x) = \frac{1} {2}\varPsi ^{T}(x)\mathcal{K}(i\partial )\varPsi (x). }$$
Here Ψ can be either a (multicomponent) bosonic field or a (multicomponent) fermionic field in the Nambu representation. The Nambu representation in terms of the original (Dirac) representation can be expressed as
$$\displaystyle{ \varPsi = \left (\begin{array}{c} \bar{\psi }^{T}\\ \psi \\ \end{array} \right ),\qquad \mathcal{K}(i\partial ) = \left (\begin{array}{cc} \quad 0\quad &\mathcal{K}_{D}(i\partial ) \\ -\mathcal{K}_{D}^{T}(-i\partial )& 0\\ \end{array} \right ). }$$
If the Ψ field satisfies some self-adjoint property, \(\varPsi ^{\dag } =\varPsi \varGamma _{0}\), then the Hermiticity of the action requires \(\mathcal{K} =\varGamma _{ 0}^{\dag }\mathcal{K}^{\dag }\varGamma _{0}^{{\ast}}\). The bosonic/fermionic nature dictates \(\mathcal{K}^{T}(-i\partial ) =\alpha \mathcal{K}(i\partial )\).
The operator EoM reads as
$$\displaystyle{ \mathcal{K}(i\partial )\varPsi (x) = 0, }$$
which implies that the spectral function \(\varrho =\varrho _{\varPsi \varPsi }\) also satisfies the equation
$$\displaystyle{ \mathcal{K}(i\partial )\varrho (x) = 0. }$$
In case of general relativistic theories with a single mass shell, there exists a differential operator D(i ∂) (Klein–Gordon divisor [4]), for which
$$\displaystyle{ \mathcal{K}(i\partial )D(i\partial ) = -\partial ^{2} - m^{2}, }$$
the Klein–Gordon operator. Then
$$\displaystyle{ \varrho (x) = D(i\partial )\varrho _{0}(x),\qquad \varrho (k) = D(k)\varrho _{0}(k), }$$
where \(\varrho _{0}(k)\) is the spectral function of the one-component bosonic field (Klein–Gordon field)
$$\displaystyle{ (\partial ^{2} + m^{2})\varrho _{ 0}(x) = 0,\qquad \varrho _{0}(t = 0,\mathbf{x}) = 0,\;\partial _{t}\varrho _{0}(t = 0,\mathbf{x}) = -i\delta (\mathbf{x}). }$$
This initial value problem has the following solution in real time:
$$\displaystyle{ \varrho _{0}(t,\mathbf{k}) = -i\frac{\sin \omega _{k}t} {\omega _{k}},\qquad \omega _{k}^{2} = \mathbf{k}^{2} + m^{2}; }$$
in the Fourier space, this implies
$$\displaystyle{ \varrho _{0}(k) = 2\pi \mathop{\text{sgn}}(k_{0})\delta (k^{2} - m^{2}) = \frac{2\pi } {2\omega _{k}}(\delta (k_{0}-\omega _{k}) -\delta (k_{0}+\omega _{k})). }$$
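With the delta functions, the canonical sum rule ∫dk₀/(2π) k₀ϱ₀(k₀, **k**) = 1 and the real-time form above can be checked by elementary means; a short Python sketch (numerical values arbitrary):

```python
import cmath
import math

m, kvec = 1.0, 0.7
wk = math.sqrt(kvec * kvec + m * m)

# rho_0(k) = (2pi/2wk)(delta(k0 - wk) - delta(k0 + wk)); doing the k0
# integral analytically, the sum rule  int dk0/(2pi) k0 rho_0 = 1  reads
sum_rule = (1 / (2 * wk)) * (wk - (-wk))
assert abs(sum_rule - 1.0) < 1e-12

# the inverse Fourier transform reproduces rho_0(t,k) = -i sin(wk t)/wk
t = 0.9
rho_t = (1 / (2 * wk)) * (cmath.exp(-1j * wk * t) - cmath.exp(1j * wk * t))
assert abs(rho_t - (-1j * math.sin(wk * t) / wk)) < 1e-12
```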
As an example we give the Klein–Gordon divisor for the fermion field in the Dirac representation and for the gauge field in R ξ gauges:
$$\displaystyle{ D(p) = p\!\!/ + m,\qquad D^{\mu \nu }(k) = -g^{\mu \nu } + (1-\xi )\frac{k^{\mu }k^{\nu }} {k^{2}}. }$$
We can also define the Euclidean version of the Klein–Gordon divisors as
$$\displaystyle{ D_{E}(p) = ip_{E\mu }\gamma _{\mu } + m,\qquad D_{E\mu \nu }(k) =\delta _{\mu \nu } - (1-\xi )\frac{k_{E\mu }k_{E\nu }} {k_{E}^{2}}, }$$
where \(\gamma _{E\mu } =\{\gamma _{0},i\gamma \}\) are self-adjoint 4 × 4 matrices.
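The defining property K(p)D(p) = p² − m² can be verified directly for the Dirac case with explicit γ matrices; the pure-Python sketch below assumes the standard Dirac representation and an arbitrary numerical momentum:

```python
# 4x4 complex matrices as nested lists; standard Dirac representation
def mat(rows):
    return [[complex(v) for v in r] for r in rows]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

g = [
    mat([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, -1, 0], [0, 0, 0, -1]]),   # gamma^0
    mat([[0, 0, 0, 1], [0, 0, 1, 0], [0, -1, 0, 0], [-1, 0, 0, 0]]),   # gamma^1
    mat([[0, 0, 0, -1j], [0, 0, 1j, 0], [0, 1j, 0, 0], [-1j, 0, 0, 0]]),  # gamma^2
    mat([[0, 0, 1, 0], [0, 0, 0, -1], [-1, 0, 0, 0], [0, 1, 0, 0]]),   # gamma^3
]

m = 1.3
p = [0.9, 0.2, -0.4, 0.7]                     # (p0, p1, p2, p3), arbitrary
# pslash = gamma^0 p0 - gamma . p
pslash = [[p[0] * g[0][i][j] - sum(p[a + 1] * g[a + 1][i][j] for a in range(3))
           for j in range(4)] for i in range(4)]

eye = mat([[1 if i == j else 0 for j in range(4)] for i in range(4)])
K = [[pslash[i][j] - m * eye[i][j] for j in range(4)] for i in range(4)]  # Dirac kernel
D = [[pslash[i][j] + m * eye[i][j] for j in range(4)] for i in range(4)]  # KG divisor

p2 = p[0]**2 - p[1]**2 - p[2]**2 - p[3]**2
KD = mul(K, D)
# K(p) D(p) = (p^2 - m^2) * identity, the momentum-space Klein-Gordon operator
assert all(abs(KD[i][j] - (p2 - m**2) * eye[i][j]) < 1e-12
           for i in range(4) for j in range(4))
```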
Using the generic relations between the spectral functions and propagators, we can write in particular
$$\displaystyle\begin{array}{rcl} & & G^{(ra)}(k) = \frac{D(k)} {k^{2} - m^{2}}\biggr |_{k_{0}\rightarrow k_{0}+i\varepsilon },\qquad G^{(ar)}(k) = \frac{D(k)} {k^{2} - m^{2}}\biggr |_{k_{0}\rightarrow k_{0}-i\varepsilon }, \\ & & iG^{(12)}(k) =\alpha n_{\alpha }(k_{0})D(k)\varrho _{0}(k),\qquad iG^{(21)}(k) = (1 +\alpha n_{\alpha }(k_{0}))D(k)\varrho _{0}(k), \\ & & G^{(33)}(k_{E}) = \frac{D_{E}(k_{E})} {k_{E}^{2} + m^{2}}\qquad (k_{E0} = (2n + 1 -\varTheta (\alpha ))\pi T). {}\end{array}$$
Now let us calculate the path integral in the Gaussian case. By completing the square in the exponent of (2.39), we have schematically (using the Hermiticity of the kernel)
$$\displaystyle{ \frac{i} {2}\varPsi ^{T}\mathcal{K}\varPsi + J\varPsi = \frac{i} {2}\varPsi '^{T}\mathcal{K}\varPsi ' + \frac{i} {2}J^{T}\mathcal{K}^{-1}J, }$$
where \(\varPsi ' =\varPsi -i\mathcal{K}^{-1}J\). Then, by introducing a new integration variable Ψ → Ψ′, we have \(Z[J] = Z_{0}\,e^{\frac{i} {2} J^{T}\mathcal{K}^{-1}J }\). This form suggests that the generating functional is a Gaussian function of the currents. In the formal derivation, we would have to take into account carefully the boundary conditions when changing over to the new variable. But a shortcut can be taken: we know that the second functional derivative of the generating functional with respect to the currents yields the propagators. Therefore, consistency requires
$$\displaystyle{ Z[J] = Z_{0}\,e^{\frac{i} {2} \int \! \frac{d^{4}p} {(2\pi )^{4}} \, J^{T}(-p)G(p)J(p) },\qquad Z_{0} =\mathop{ \text{Tr}}e^{-\beta H}. }$$
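The structure of this result is already visible in a zero-dimensional Euclidean toy integral, Z[J] = ∫dx exp(−kx²/2 + Jx) = (2π/k)^{1/2} exp(J²/2k); a numerical sketch (midpoint quadrature, all parameters arbitrary):

```python
import math

# 1D Euclidean toy "path integral": Gaussian with kernel k and source J
k, J = 2.0, 0.7

def Z(J, nstep=400000, L=20.0):
    h = 2 * L / nstep
    return h * sum(math.exp(-0.5 * k * x * x + J * x)
                   for x in (-L + (i + 0.5) * h for i in range(nstep)))

# completing the square: Z[J] = Z[0] * exp(J^2 / (2k)) with Z[0] = sqrt(2 pi / k)
exact = math.sqrt(2 * math.pi / k) * math.exp(J * J / (2 * k))
assert abs(Z(J) - exact) < 1e-6
```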
We can also evaluate Z0 in the quadratic theory. We should take into account that it comes entirely from the Euclidean part (segment 3) of the time path. The Gaussian integral in the case of bosonic/fermionic variables yields
$$\displaystyle{ Z_{0} =\int \mathcal{D}\varPsi e^{-\frac{1} {2} \int \varPsi ^{\dag }\mathcal{K}_{ E}\varPsi } = (\det \mathcal{K}_{E})^{-\alpha /2}. }$$
The free energy is therefore
$$\displaystyle{ F = \frac{\alpha T} {2} \ln \det \mathcal{K}_{E} = \frac{\alpha V } {2} \int \! \frac{d^{4}p} {(2\pi )^{4}}\,\ln \det \mathcal{K}_{E}(p). }$$
For fermions, up to a constant, the integrand can be expressed through the Euclidean Dirac kernel as \(2\ln \det \mathcal{K}_{DE}(p)\) (see Eq. (2.66)).
To evaluate this expression formally, we derive a differential equation for it by first shifting \(\mathcal{K}_{E} \rightarrow \mathcal{K}_{E} + a\) and then differentiating with respect to a. For the free energy density f = F∕V, we have
$$\displaystyle{ \frac{\partial f} {\partial a} = \frac{\alpha } {2}\int \! \frac{d^{4}p} {(2\pi )^{4}}\,\mathop{ \text{Tr}}(\mathcal{K}_{E} + a)^{-1} = \frac{\alpha } {2}\mathop{\text{Tr}}G_{a}^{(33)}(x = 0), }$$
where the a subscript in G a (33) reminds us that here we have to work with the shifted kernel. Using the definition of G(33) through G(12) and G(21) (cf. (2.42)), we can write the symmetric combination
$$\displaystyle{ \frac{\partial f} {\partial a} = \frac{\alpha } {2}\,\mathop{\text{Tr}}iG_{a}^{(rr)}(x = 0) = \frac{\alpha } {2}\int \! \frac{d^{4}p} {(2\pi )^{4}}\,\left (\frac{1} {2} +\alpha n_{\alpha }(p_{0}-\mu )\right )\varrho _{0a}(p)\mathop{\text{Tr}}D_{Ea}(p), }$$
where we have also taken the chemical potential into account.
To proceed, it is worth beginning with a brief discussion of the bosonic and fermionic cases separately. In the bosonic case, the Klein–Gordon divisor D Ea is a projector, and the trace simply yields the dimension, i.e., the number of bosonic degrees of freedom N b . Here a has the meaning of a shift in the squared mass, so we have
$$\displaystyle{ \frac{\partial f_{b}} {\partial m^{2}} = \frac{N_{b}} {2} \int \! \frac{d^{4}p} {(2\pi )^{4}}\,\left (\frac{1} {2} + n_{+}(p_{0}-\mu )\right )\varrho _{0}(p), }$$
where \(\varrho _{0}\) is the Klein–Gordon spectral function with mass m.
For fermions, the factor 1∕2 is absent in passing to the Dirac representation \(\mathcal{K}_{DEa}\), but now a has the meaning of a (linear) mass term. The trace of the divisor is \(N_{f}m\). Therefore, the \(m^{2}\) derivative can be written
$$\displaystyle{ \frac{\partial f_{f}} {\partial m^{2}} = -\frac{N_{f}} {2} \int \! \frac{d^{4}p} {(2\pi )^{4}}\,\left (\frac{1} {2} - n_{-}(p_{0}-\mu )\right )\varrho _{0}(p). }$$
The two cases therefore yield a result that can be uniformly summarized as
$$\displaystyle\begin{array}{rcl} \frac{\partial f} {\partial m^{2}}& =& \frac{\alpha N} {2} \int \! \frac{d^{4}p} {(2\pi )^{4}}\,\left (\frac{1} {2} +\alpha n_{\alpha }(p_{0}-\mu )\right )\varrho _{0}(p) \\ & =& \frac{N} {2} \!\int \! \frac{d^{3}\mathbf{p}} {(2\pi )^{3}2\omega _{p}}\left (n_{\alpha }(\omega _{p}-\mu ) + n_{\alpha }(\omega _{p}+\mu )+\alpha \right ),{}\end{array}$$
where we used the free Klein–Gordon spectral function. In this expression, the last term is a constant not depending on the temperature. It has no thermodynamic meaning, and so we omit it. The rest can be integrated, and we use the physical condition \(f(m^{2} \rightarrow \infty ) = 0\) to obtain, eventually,
$$\displaystyle{ f = \frac{N} {2} [f(T,\mu ) + f(T,-\mu )],\qquad f(T,\mu ) =\alpha T\int \!\frac{d^{3}\mathbf{p}} {(2\pi )^{3}}\,\ln (1 -\alpha e^{-\beta (\omega _{p}-\mu )}). }$$
The two terms of this formula can be associated with the separate contributions of particles and their antiparticles, respectively. The number of degrees of freedom equals N∕2 for both. The chemical potential of the antiparticles has opposite sign relative to μ of the particles.
From the thermodynamic potential, one obtains the other thermodynamic quantities, too. The grand canonical potential f is connected to the internal energy density by
$$\displaystyle{ f =\varepsilon -Ts -\mu n = -p. }$$
The energy density and the number density for one particle degree of freedom have the expressions (the β derivative is taken at fixed βμ)
$$\displaystyle{ \varepsilon = \frac{\partial (\beta f)} {\partial \beta } = \int \! \frac{d^{3}\mathbf{p}} {(2\pi )^{3}}\,\omega _{p}n_{\alpha }(\omega _{p}-\mu ),\qquad n = -\frac{\partial f} {\partial \mu } = \int \! \frac{d^{3}\mathbf{p}} {(2\pi )^{3}}\,n_{\alpha }(\omega _{p}-\mu ). }$$
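These thermodynamic relations can be verified numerically for one fermionic degree of freedom: below, f(T, μ) is integrated directly, and n = −∂f/∂μ and ε = ∂(βf)/∂β (at fixed βμ) are formed by finite differences and compared with the momentum integrals above. A sketch; the cutoff and step numbers are arbitrary choices:

```python
import math

alpha, m = -1.0, 1.0            # one fermionic degree of freedom of mass m

def integrate(g, a, b, nstep=20000):
    h = (b - a) / nstep
    return h * sum(g(a + (i + 0.5) * h) for i in range(nstep))

def f_free(T, mu, pmax=80.0):
    """Free-energy density f(T, mu) of one degree of freedom."""
    return alpha * T * integrate(
        lambda p: p * p / (2 * math.pi**2)
        * math.log(1 - alpha * math.exp(-(math.sqrt(p * p + m * m) - mu) / T)),
        0.0, pmax)

T, mu, h = 0.8, 0.3, 1e-4
beta = 1.0 / T

# n = -df/dmu against the direct momentum integral
n_fd = -(f_free(T, mu + h) - f_free(T, mu - h)) / (2 * h)
n_int = integrate(lambda p: p * p / (2 * math.pi**2)
                  / (math.exp(beta * (math.sqrt(p * p + m * m) - mu)) - alpha),
                  0.0, 80.0)
assert abs(n_fd - n_int) < 1e-6

# epsilon = d(beta f)/d(beta) at fixed beta*mu, against the direct integral
bmu = beta * mu

def betaf(b):
    return b * f_free(1.0 / b, bmu / b)

e_fd = (betaf(beta + h) - betaf(beta - h)) / (2 * h)
e_int = integrate(lambda p: p * p / (2 * math.pi**2) * math.sqrt(p * p + m * m)
                  / (math.exp(beta * (math.sqrt(p * p + m * m) - mu)) - alpha),
                  0.0, 80.0)
assert abs(e_fd - e_int) < 1e-5
```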

These expressions are finite unless α = 1 and \(\vert \mu \vert > m\) (the \(\vert \mu \vert \rightarrow m + 0\) limit is still finite). In this pathological case, a pole appears at \(p = \sqrt{\mu ^{2 } - m^{2}}\). Physically, this phenomenon is connected to Bose–Einstein condensation: in response to an attempt to increase the particle number density, the system generates a finite field condensate.

Some limiting cases are the following:
  • If T ≪ m and μ = 0, one has, for both the fermionic and bosonic cases,
    $$\displaystyle{ p = T\left (\frac{mT} {2\pi } \right )^{3/2}e^{-\beta m},\qquad \frac{\varepsilon } {p} = \frac{3} {2} + \frac{m} {T}. }$$
  • If T ≫ m, μ, one has
    $$\displaystyle{ \frac{p_{b}} {T^{4}} = \frac{\pi ^{2}} {90},\qquad p_{f} = \frac{7} {8}p_{b},\qquad \varepsilon = 3p. }$$
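Both high-temperature limits follow from the pressure integral above with m = μ = 0; a direct numerical check of the Stefan–Boltzmann values (cutoff and step numbers arbitrary):

```python
import math

def integrate(g, a, b, nstep=100000):
    h = (b - a) / nstep
    return h * sum(g(a + (i + 0.5) * h) for i in range(nstep))

T = 1.0
# massless boson at mu = 0: p_b / T^4 -> pi^2 / 90
p_b = -T * integrate(lambda x: x * x / (2 * math.pi**2) * math.log(1 - math.exp(-x)),
                     0.0, 50.0)
assert abs(p_b / T**4 - math.pi**2 / 90) < 1e-4

# massless fermion: p_f = (7/8) p_b
p_f = T * integrate(lambda x: x * x / (2 * math.pi**2) * math.log(1 + math.exp(-x)),
                    0.0, 50.0)
assert abs(p_f / p_b - 7.0 / 8.0) < 1e-4
```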

2.6 Perturbation Theory

If the action contains nonquadratic terms, then we must rely on some approximations for the evaluation of the path integral. Under the assumption that the quadratic (Gaussian) approximation captures the main physical features of the system, only slight nonlinear modifications are expected. Then one can use perturbation theory.

In conformity with this understanding, the action is split into a quadratic and a higher-order part,
$$\displaystyle{ S[A] = S^{(2)}[A] + S_{int}[A], }$$
and we write formally
$$\displaystyle{ S_{int}[A] =\int \! d^{d}x\,(-g)A_{ 1}(x)\ldots A_{v}(x) }$$
for the interaction part. In general, g could contain local operations such as derivatives, which we leave hidden in this overview. We recognize next that the generating functional (2.39) can be written as an expectation value in the free theory:
$$\displaystyle{ Z[J] = Z_{00}\left \langle e^{iS_{int}[A]+\int JA}\right \rangle _{ 0}, }$$
where Z00 is the free partition function at J = 0. For perturbation theory, we expand the exponential in a Taylor series:
$$\displaystyle{ Z[J] = Z_{00}\sum \limits _{m=0}^{\infty } \frac{1} {m!}\left \langle (iS_{int}[A])^{m}e^{\int JA}\right \rangle _{ 0}. }$$
Formally, we can also rewrite it as
$$\displaystyle{ Z[J] = Z_{00}e^{iS_{int}[ \frac{\delta }{\delta J}]}\left \langle e^{\int JA}\right \rangle _{ 0} = e^{iS_{int}[ \frac{\delta }{\delta J}]}e^{\frac{i} {2} \int J^{T}GJ }. }$$
For a certain n-point function, we need to apply the required functional derivatives to this expression and eventually put J = 0 (cf. (2.35)). This leads to the Feynman rules for the computation of the n-point function \(\left \langle T_{c}A_{i_{1}}(p_{1})\ldots A_{i_{n}}(p_{n})\right \rangle\), where A i (p) are fundamental fields with momentum p and with a generic index (also including the Keldysh indices) i:
  • draw a graph with points and links (lines);

  • associate the points where only a single link ends (external legs) to the fields \(A_{i_{k}}(p_{k})\), \(k = 1,\mathop{\ldots },n\), where n is the number of this type of point;

  • associate the points with v links to the piece in the interaction S int containing the product of v field operators. These points are called interaction vertices;

  • associate propagators to the links connecting any two points. A number of externally determined propagators enter into each vertex and generate virtual fields propagating along the internal lines of the diagram.

Draw all possible diagrams with n external fields and m vertices, and eventually evaluate them with help of the following rules:
  • external legs force the propagator joining them to carry momentum \(p_{k}\);

  • a link connecting points with indices i and j is represented by iG(ij)(q);

  • a vertex having incoming momenta \(q_{1},\mathop{\ldots },q_{v}\) yields a contribution \(-ig(2\pi )^{d}\delta (\sum _{a=1}^{v}q_{a})\).

In the last step, one multiplies all these contributions, and an extra factor 1∕m! is included in the mth order of the perturbation theory. Finally, one evaluates the integrals over the momenta flowing through the internal links: \(\varPi _{i}\int \!\frac{d^{d}q_{ i}} {(2\pi )^{d}} \,\).

At the one-loop level, there are two diagrams that are the most important: the tadpole and the bubble diagrams; cf. Fig. 2.2. The details of their computation can be found in Appendix B.
Fig. 2.2

The tadpole and the bubble diagrams
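The simplest thermal sum behind the tadpole is the bosonic Matsubara sum T Σ_n (ω_n² + ω²)^{−1} = (1 + 2n_B(ω))/(2ω), i.e., the equal-time propagator of a single mode; a sketch of its (slow, ∼1/n_max) convergence, with arbitrary parameter values:

```python
import math

T, w = 1.0, 1.5
beta = 1.0 / T

def matsubara_sum(nmax):
    """Truncated bosonic sum T * sum_{|n|<=nmax} 1/(w_n^2 + w^2), w_n = 2 pi n T."""
    return T * sum(1.0 / ((2 * math.pi * n * T)**2 + w * w)
                   for n in range(-nmax, nmax + 1))

nB = 1.0 / (math.exp(beta * w) - 1.0)
exact = (1 + 2 * nB) / (2 * w)          # equal-time propagator of the mode
# the tail falls off like 1/nmax, so a large cutoff is needed at this accuracy
assert abs(matsubara_sum(50000) - exact) < 1e-4
```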

2.7 Functional Methods

Perturbation theory can be made more efficient if we realize that the only quantities that need calculation are the loop integrals arising from the rules for setting up Feynman diagrams described in the last section. Therefore, it is worth developing a formalism that concentrates solely on them.

The first step is to introduce W[J], the generator of the connected diagrams:
$$\displaystyle{ Z[J] = e^{W[J]}. }$$
Since the generator of the imaginary time diagrams Z[J3] can be considered the partition function in the presence of an external field, it follows that − W[J3] is the free energy in the presence of an external field.
The generator W generates the connected diagrams, which can be proved by differentiating Z with respect to the current \(J_{x}\) as follows. The differentiation yields a series of diagrams with one external leg. There is a portion of each diagram connected to the external leg; if we denote by \(\bar{W}\) the generator of the connected diagrams, this portion can be computed as \(\delta \bar{W}/\delta J_{x}\). The remainder, on the other hand, is the sum of all zero-leg diagrams, i.e., just Z[J]. Therefore, we can write down the functional differential equation
$$\displaystyle{ \frac{\delta Z[J]} {\delta J_{x}} = \frac{\delta \bar{W}[J]} {\delta J_{x}} Z[J]. }$$
Its obvious solution is \(Z[J] = e^{\bar{W}[J]}\). Thus we have \(W =\bar{ W}\).
The connected n-point function is the (left) derivative of W at the zero external field. In particular, the propagator is calculated as
$$\displaystyle{ iG_{xy} = \frac{\partial ^{2}W[J]} {\partial J_{x}\partial J_{y}}. }$$
Using (2.93), we find that
$$\displaystyle{ W[J] = \frac{i} {2}J_{x}G_{xy}J_{y} -\frac{\alpha \beta V } {2} \mathop{\text{Tr}}\ln \mathcal{K}_{E} + \left \langle e^{\int \!JA}\left (e^{iS_{int}[A]} - 1\right )\right \rangle _{ conn}. }$$
Then the free energy reads
$$\displaystyle{ F[J] = \frac{\beta } {2}J_{x}G_{E,xy}J_{y} + F_{0} +\beta \left \langle e^{\int \!JA}\left (1 - e^{-S_{E,int}[A]}\right )\right \rangle _{ conn}, }$$
with F0 denoting the free energy of the free system evaluated in the previous section. Here W (or the free energy F) is a function(al) of the external current. The external current can be set by hand. This is the “tool” for controlling the system from outside; thermodynamically, the currents/sources correspond to the extensive variables. We may, however, want to express the dependence of the free energy on
$$\displaystyle{ \bar{A}_{x}[J] \equiv \left \langle A_{x}\right \rangle _{J} = \frac{\partial W} {\partial J_{x}}, }$$
where the subscript J is to emphasize that the expectation value is to be evaluated in the presence of the external current. This is a dynamical quantity, thermodynamically corresponding to the intensive variables. In thermodynamics, this type of change is realized by applying a Legendre transformation, which will be applied here, too.
Technically, the change of W under an infinitesimal change of J reads as \(\delta W =\delta J_{x}\bar{A}_{x}\) (the ordering is important in the case of fermionic theories). We define the associated potential via a Legendre transformation:
$$\displaystyle{ i\varGamma [\bar{A}] = W[J] -\int J_{x}\bar{A}_{x}. }$$
In the Euclidean case, we proceed by the replacement i Γ → −Γ E . Since − W E is the free energy expressed through J, Γ E is the free energy expressed through \(\bar{A}\). As can be easily seen, \(\delta i\varGamma = -J_{x}\delta \bar{A}_{x}\), or
$$\displaystyle{ \frac{\partial i\varGamma } {\partial \bar{A}_{x}} = -J_{x}. }$$
In the fermionic case, it is a right derivative.

The value of Γ thus defined has many interesting properties. Diagrammatically, \(\left \langle A_{x}\right \rangle _{J}\) is a one-point function. Fixing the value of \(\bar{A}\) means fixing the value of the one-point function. Therefore, we must ensure the vanishing of the perturbative corrections to the one-point function. This requirement is equivalent to the omission of all diagrams that contain a part connected to the rest by a single line. Such diagrams are called one-particle-reducible diagrams. Those diagrams that remain connected after cutting any single internal line form the class of one-particle-irreducible (1PI) diagrams. Therefore, Γ can be interpreted as the generator of the 1PI diagrams.

The physical expectation values must be computed at zero external current, which means that the physical value of \(\bar{A}\) can be determined as a solution of
$$\displaystyle{ \frac{\partial \varGamma } {\partial \bar{A}_{x}}\biggr |_{\bar{A}_{phys}} = 0. }$$
This is the same EoM as obtained in the classical case, except that one replaces the classical action by Γ. Therefore, the generator of the 1PI diagrams is also called the quantum effective action.
In the functional integral representation, we have
$$\displaystyle{ e^{i\varGamma [\bar{A}]} = e^{iW[J]-\int J\bar{A}} =\int \mathcal{D}A\,e^{iS[A]+\int J(A-\bar{A})} =\int \mathcal{D}a\,e^{iS[\bar{A}+a]+\int Ja}, }$$
where in the last equality, the integration variable is shifted to a by \(A =\bar{ A} + a\), and the full quantum field is split into the sum of the expectation value (mean field, background) \(\bar{A}\) and the fluctuations around it, a. This means that \(\left \langle a\right \rangle = \left \langle A\right \rangle -\bar{ A} = 0\), i.e., the expectation value of the fluctuations is zero.
In fact, the role of the external current is to ensure that the fluctuations have zero expectation value. This can equivalently be required by an extra condition as
$$\displaystyle{ e^{i\varGamma [\bar{A}]} =\int \mathcal{D}a\,e^{iS[a+\bar{A}]}\biggr |_{ \left \langle a\right \rangle =0}. }$$
This is the compact formulation of the background field method.
From this formula, it is also evident that if we require a = 0 instead of \(\left \langle a\right \rangle = 0\), i.e., no fluctuations are allowed at all, then Γ = S will be the result. This means that the quantum effective action to leading order is the classical action:
$$\displaystyle{ \varGamma = S +\mathrm{ quantum\ fluctuations}. }$$
Together with (2.104), this formula further supports the interpretation of Γ as the quantum effective action.

2.7.1 Two-Point Functions and Self-Energies

Let us investigate, in particular, the second derivative of Γ at the physical value. The 1PI quantum correction to the two-point function is called the self-energy \(i\varSigma\):
$$\displaystyle{ i\varGamma _{xy}^{(1,1)} = \frac{\partial ^{2}i\varGamma } {\partial \bar{A}_{x}\partial \bar{A}_{y}} = i\mathcal{K}_{xy} - i\varSigma _{xy}, }$$
where \(\mathcal{K}_{xy} = \frac{\partial ^{2}S} {\partial \bar{A}_{x}\partial \bar{A}_{y}}\) is the classical kernel. In the case of fermion fields, we have to apply one left and one right derivative here, which is indicated by the upper (double) index of the notation introduced on the left-hand side.
On differentiating (2.103) with respect to J from the left, we obtain
$$\displaystyle{ \frac{\partial ^{2}W} {\partial J_{x}\partial J_{z}} \frac{\partial ^{2}i\varGamma } {\partial \bar{A}_{z}\partial \bar{A}_{y}} = -\delta _{xy}. }$$
The second derivatives of W and Γ are therefore the inverse quantities of each other. We can write this expression as
$$\displaystyle{ G_{xz}(\mathcal{K}_{zy} -\varSigma _{zy}) = (\mathcal{K}_{xz} -\varSigma _{xz})G_{zy} =\delta _{xy}. }$$
In the Euclidean formalism, we define the self-energy as
$$\displaystyle{ (\mathcal{K}_{E}(p) +\varSigma _{E}(p))G_{E}(p) = 1. }$$
Since \(\varSigma\) is the (quantum part of the) 1PI two-point function and G is the inverse of the full two-point function, (2.110) generates the iterative relation that tells how the 1PI diagrams should be resummed in order to give the full result. To leading order we have \(\varSigma = 0\), and so \(G_{0} = \mathcal{K}^{-1}\). With it we obtain formally
$$\displaystyle{ G = G_{0} + G_{0}\varSigma G = G_{0} + G\varSigma G_{0} = G_{0} + G_{0}\varSigma G_{0} + G_{0}\varSigma G_{0}\varSigma G_{0} + \cdots \,. }$$
This formula is sometimes called the Dyson–Schwinger (DS) series.
From this formula, a simple rule can be read off for determining the self-energy. The correction to the two-point function, \(\delta \left \langle T_{C}AA\right \rangle = \left \langle T_{C}AA\right \rangle - iG_{0}\), can always be written formally as
$$\displaystyle{ \delta \left \langle T_{C}AA\right \rangle = iG_{0}\delta \left \langle T_{C}AA\right \rangle _{amp}iG_{0} = iG_{0}\varSigma G_{0} + \cdots \,, }$$
or in the Euclidean formalism,
$$\displaystyle{ \delta \langle T_{C}A^{(3)}A^{(3)}\rangle = G_{ 0}\delta \left \langle T_{C}A^{(3)}A^{(3)}\right \rangle _{ amp}G_{0} = -G_{0}\varSigma _{E}G_{0} + \cdots \,, }$$
which means
$$\displaystyle{ \varSigma = i\delta \left \langle T_{C}AA\right \rangle _{amp},\qquad \varSigma _{E} = -\delta \langle T_{C}A^{(3)}A^{(3)}\rangle _{ amp}. }$$
The quantity \(\delta \left \langle T_{C}AA\right \rangle _{amp}\) is the “amputated” two-point function, where we chop off the propagators belonging to the external lines.

Although it looks quite simple, (2.112) in fact resums a series of individually increasingly divergent contributions. Namely, the free propagator exhibits a pole on the mass shell \(p^{2} = m^{2}\). Near the mass shell, writing \(p^{2} = m^{2} + x\), the free propagator behaves as \(G_{0} \sim 1/x\). Therefore, the nth term in the DS series is proportional to 1∕x n . These terms are thus more and more divergent at a finite value of the momentum: this is characteristic of an infrared (IR) divergence. The DS series resums the most relevant terms and provides a propagator G free of unphysical divergences. It may still have a pole, but that one is physical: no further diagrams can change it. This logic will be followed later in performing the resummation of IR-divergent (IR-sensitive) series.
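The effect of the resummation can be illustrated with a toy model: take G₀ = 1/x near the mass shell and a constant self-energy Σ (both numbers arbitrary). The truncated series blows up as x → 0 while the resummed propagator stays finite:

```python
# toy Dyson-Schwinger resummation near the mass shell: G0 = 1/x with
# x = p^2 - m^2, and a constant self-energy Sigma (numbers arbitrary)
Sigma = 0.1

def partial_sum(x, nterms):
    G0 = 1.0 / x
    return sum(G0 * (Sigma * G0)**k for k in range(nterms))

# off the mass shell (|Sigma*G0| < 1) the geometric series converges to 1/(x - Sigma)
x = 1.0
assert abs(partial_sum(x, 200) - 1.0 / (x - Sigma)) < 1e-12

# close to the mass shell the truncated series blows up term by term,
# while the resummed propagator stays finite, G ~ -1/Sigma
x = 1e-3
assert abs(partial_sum(x, 10)) > 1e10
assert abs(1.0 / (x - Sigma) + 1.0 / Sigma) < 0.2
```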

The propagator is in general a matrix expressing the correlation of all possible field components at the same momentum (if we have spacetime translation-invariance). This holds for the self-energy, too. In particular, if we use the real-time formalism, then \(\varSigma\) is a matrix with Keldysh indices 1, 2. Omitting other components, the DS equation can be written as
$$\displaystyle{ \mathcal{K}(p)G^{(ij)}(p) = (-1)^{i+1}(\delta ^{ij} +\varSigma ^{(ik)}(p)G^{(kj)}(p)),\qquad i,j,k\, \in \,\{ 1,2\}, }$$
i.e., the sign is + for i = 1 and − for i = 2. In the R/A formalism, G(aa) = 0 implies \(\varSigma ^{(rr)} = 0\). The previous equation can be rewritten with help of the \(\sigma _{1}\) Pauli matrix:
$$\displaystyle{ \mathcal{K}(p)G^{(ab)}(p) =\sigma _{ 1}^{ab'}\left [\delta ^{b'b} +\varSigma ^{(b'c)}(p)G^{(cb)}(p)\right ],\qquad a,b,b',c\, \in \,\{ r,a\}. }$$
This relation shows that the retarded propagator is not mixed with the others:
$$\displaystyle{ \mathcal{K}(p)G^{(ra)}(p) = 1 +\varSigma ^{(ar)}(p)G^{(ra)}(p). }$$
The generic relation between the propagators discussed in Sect. 2.4 implies relations between the self-energies. In particular,
$$\displaystyle{ \varSigma ^{(11)} =\varSigma ^{(ar)} -\varSigma ^{(12)} =\varSigma ^{(ra)} -\varSigma ^{(21)},\qquad \varSigma ^{(aa)} = -\frac{\varSigma ^{(12)} +\varSigma ^{(21)}} {2}, }$$
$$\displaystyle{ \varSigma _{E}(\omega _{n}) =\varSigma ^{(ar)}(k_{ 0} \rightarrow i\omega _{n}). }$$
The causal behavior of the retarded propagator implies that \(\varSigma ^{(ar)}\) is either local or at least causal itself. Therefore, after separating the local contribution \(\varSigma _{0}\), one writes for the remainder (\(\varSigma _{1}^{(ar)}\)) a dispersive representation:
$$\displaystyle{ \varSigma ^{(ar)}(k) =\varSigma _{ 0} +\varSigma _{ 1}^{(ar)}(k),\qquad \varSigma _{ 1}^{(ar)}(k) = \int \!\frac{d\omega } {2\pi }\,\frac{\mathop{\text{Disc}}\,i\varSigma ^{(ar)}(\omega,\mathbf{k})} {k_{0} -\omega +i\varepsilon }. }$$
The discontinuity can be expressed through other self-energies, too:
$$\displaystyle{ \text{Disc}\,\varSigma ^{(ar)} =\varSigma ^{(ar)} -\varSigma ^{(ra)} =\varSigma ^{(12)} -\varSigma ^{(21)}. }$$
Finally, the relation between G(rr) and the retarded propagator implies that
$$\displaystyle{ \varSigma ^{(aa)} = \left (\frac{1} {2} +\alpha n_{\alpha }(k_{0})\right )\mathop{\text{Disc}}\ i\varSigma ^{(ar)}. }$$
This means that full knowledge of the retarded self-energy (or equivalently, the Matsubara self-energy) is enough to fully reconstruct all the self-energies of the system.
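The dispersive reconstruction can be checked numerically for a toy retarded self-energy (our own illustration; the Lorentzian form and all parameters below are assumptions). We take a spectral density \(\rho(\omega) = -2\,\mathrm{Im}\,\varSigma^{R}(\omega)\) and verify that the principal-value dispersion integral rebuilds the real part of the retarded function:

```python
import numpy as np

# Toy retarded self-energy: Sigma_R(k0) = 1/(k0 - w0 + i*gamma).
# Its spectral density rho(w) = -2 Im Sigma_R(w) is a Lorentzian, and the
# dispersion relation should rebuild the real part:
#   Re Sigma_R(k0) = PV \int dw/(2pi) rho(w) / (k0 - w)
w0, gamma = 0.0, 0.5
k0 = 2.0

dw = 1e-3
# offset grid so that k0 sits midway between points: the principal value
# then emerges from the symmetric cancellation around the singularity
w = np.arange(-200.0, 200.0, dw) + 0.5 * dw
rho = 2.0 * gamma / ((w - w0) ** 2 + gamma ** 2)   # -2 Im Sigma_R
re_num = np.sum(rho / (k0 - w)) * dw / (2.0 * np.pi)

re_exact = (k0 - w0) / ((k0 - w0) ** 2 + gamma ** 2)
print(re_num, re_exact)
```

The two numbers agree closely, illustrating that knowledge of the discontinuity alone (plus the local term, absent here) fixes the full retarded function.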

2.7.2 Higher n-Point Functions

Further derivatives of (2.109) exploiting (2.101) provide relations between the 1PI and the connected n-point functions. In particular,
$$\displaystyle\begin{array}{rcl} iW_{ijk}^{(3)}& =& iG_{ ii'}iG_{jj'}iG_{kk'}i\varGamma _{i'j'k'}^{(3)}, \\ iW_{ijk\ell}^{(4)}& =& iG_{ ii'}iG_{jj'}i\varGamma _{i'j'k'}^{(3)}iG_{ k'a}i\varGamma _{abc}^{(3)}iG_{ bk}iG_{c\ell} +\ldots \\ & & \quad + iG_{ii'}iG_{jj'}iG_{kk'}iG_{\ell\ell'}i\varGamma _{i'j'k'\ell'}^{(4)}. {}\end{array}$$
Here we used the notation
$$\displaystyle{ \varGamma _{i_{1}\ldots i_{n}}^{(n)} = \frac{\partial ^{n}\varGamma [\bar{A}]} {\partial \bar{A}_{i_{1}}\ldots \partial \bar{A}_{i_{n}}},\qquad W_{i_{1}\ldots i_{n}}^{(n)} = \frac{\partial ^{n}W[J]} {\partial J_{i_{1}}\ldots \partial J_{i_{n}}}. }$$
In the case of fermions, one has to pay attention to the order of differentiation.

These formulas correspond to the tree-level relations between the n-point functions and the vertices when vertex strengths are derived from the classical action. The derivatives of the effective action therefore play the role of the classical vertices: hence the name proper vertices. In contrast to the classical theory, quantum fluctuations produce nonzero values for arbitrary high n-point functions.

2.8 The Two-Particle Irreducible Formalism

In the previous subsection we investigated the generator of the 1PI diagrams, where the system is forced by a choice of appropriate external current distribution to take a predetermined value \(\bar{A} = \left \langle A\right \rangle\) for the expectation value of the field operator. We can go on with this logic and try to impose other constraints on the path integral. The next step is to fix the value of the two-point functions as well. Technically, what we have to do is to assign an external current to the two-point function and determine its value from the requirement that the exact two-point function coincide with the a priori chosen expression.

Thus we define an extended generator functional containing both an external current J and an external bilocal source term R:
$$\displaystyle{ Z[J,R] = e^{W[J,R]} =\int \mathcal{D}Ae^{iS[A]+J_{x}A_{x}+\frac{1} {2} iR_{xy}A_{x}\,A_{y}}, }$$
where we have suppressed the explicit indication of summation/integration. We want to fix the values of the field expectation value to \(\bar{A}\) and the connected two-point function to \(\bar{G}\):
$$\displaystyle\begin{array}{rcl} \frac{\partial W[J,R]} {\partial J_{x}} & =& \left \langle A_{x}\right \rangle =\bar{ A}_{x}, \\ 2\frac{\partial W[J,R]} {\partial iR_{xy}} & =& \left \langle T_{C}A_{x}A_{y}\right \rangle = i\bar{G}_{xy} +\bar{ A}_{x}\bar{A}_{y}.{}\end{array}$$
From these equations, one can in principle determine J and R.
Just as in the 1PI case, we define the extended effective action as
$$\displaystyle{ i\varGamma [\bar{A},\bar{G}] = W[J,R] - J_{x}\bar{A}_{x} -\frac{1} {2}iR_{xy}(\bar{A}_{x}\bar{A}_{y} + i\bar{G}_{xy}). }$$
After differentiation (right differentiation in the case of fermi fields), this double Legendre transform provides us with the relations
$$\displaystyle{ \frac{\partial i\varGamma [\bar{A},\bar{G}]} {\partial \bar{A}_{x}} = -J_{x} - iR_{yx}\bar{A}_{y},\qquad \frac{\partial i\varGamma [\bar{A},\bar{G}]} {\partial i\bar{G}_{xy}} = -\frac{1} {2}iR_{xy}. }$$
At the physical point, we should set the external sources to zero, and therefore we have
$$\displaystyle{ \frac{\partial i\varGamma [\bar{A},\bar{G}]} {\partial \bar{A}_{x}} \biggr |_{phys} = 0,\qquad \frac{\partial i\varGamma [\bar{A},\bar{G}]} {\partial i\bar{G}_{xy}} \biggr |_{phys} = 0, }$$
which suggests that Γ is indeed the quantum analogue of the extended classical action. By substituting back the physical value of \(\bar{G}\), obtained at fixed \(\bar{A}\) from the solution of the second equation, we obtain the (1PI) effective action:
$$\displaystyle{ \varGamma [\bar{A}] =\varGamma [\bar{A},\bar{G}_{phys}[\bar{A}]]. }$$
Since \(\bar{A}\) and \(\bar{G}\) are the exact one- and two-point functions, respectively, there should be no quantum corrections to them, even on the internal lines of the diagrams. In the case of the predetermined one-point function, this had the consequence that the effective action is the generator of 1PI diagrams. In a similar manner, for the case of the exact two-point function, only two-particle-irreducible (2PI) diagrams can appear in the perturbation theory: diagrams that do not fall apart when any two of their internal lines are cut. In other words, there are no perturbative self-energy corrections to the propagators representing the lines of the corresponding Feynman diagrams (also called skeleton diagrams). If we apply this requirement to the physical value of the propagator, using the fact that on the one hand, \(\left \langle T_{C}AA\right \rangle _{conn}\) equals \(i\bar{G}_{phys}\), and on the other hand, it can be expressed through the self-energy, we obtain
$$\displaystyle{ \bar{G}_{phys}^{-1} = G_{ 0}^{-1} -\varSigma _{ 2PI}[\bar{A},\bar{G}_{phys}]. }$$
This is a self-consistent (gap) equation that allows us to determine the physical connected propagator.
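A gap equation of this type is usually solved by fixed-point iteration. The sketch below is an illustration only: it approximates the 2PI self-energy by the leading high-temperature tadpole of the scalar theory (an assumption made purely for this example), and iterates until the thermal mass M is self-consistent.

```python
import math

# Illustrative gap equation M^2 = m^2 + Sigma(M), with the self-energy
# approximated (assumption, for this sketch only) by the leading
# high-temperature tadpole of the one-component scalar model:
#   Sigma(M) = lam/2 * (T^2/12 - M*T/(4*pi))
def solve_gap(m2, lam, T, tol=1e-12, max_iter=1000):
    M = 0.1 * T                                   # starting guess
    for _ in range(max_iter):
        M2_new = m2 + 0.5 * lam * (T**2 / 12.0 - M * T / (4.0 * math.pi))
        M_new = math.sqrt(max(M2_new, 0.0))
        if abs(M_new - M) < tol:
            return M_new
        M = M_new
    return M

M = solve_gap(m2=1.0, lam=1.0, T=10.0)
# residual of the self-consistency condition, should be ~0 at the solution
res = M**2 - (1.0 + 0.5 * (10.0**2 / 12.0 - M * 10.0 / (4.0 * math.pi)))
print(M, res)
```

Because the iteration map is a strong contraction here, convergence takes only a handful of steps; more realistic kernels may require damping or Newton iteration.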

In special cases, we can determine \(\varGamma [\bar{A},\bar{G}]\) explicitly. If there are no quantum corrections at all, then the path integral is saturated by the solution of the classical EoM, which is the same as the first equation of (2.129) with Γ → S. Classically, \(\bar{G} = 0\), and so the classical 2PI effective action reduces to the classical 1PI case, i.e., Γ = S.

In the free case, we can solve the system even in the presence of external sources. In bosonic theories, by copying the steps leading to (2.78), one arrives at
$$\displaystyle{ W[J,R] = \frac{1} {2}J\,i(G_{0}^{-1} + R)^{-1}J -\frac{1} {2}\mathop{\text{Tr}}\ln (G_{0}^{-1} + R). }$$
This implies
$$\displaystyle\begin{array}{rcl} \frac{\partial iW[J,R]} {\partial J} & =& i(G_{0}^{-1} + R)^{-1}J =\bar{ A}, \\ 2\frac{\partial iW[J,R]} {\partial iR} & =& J\,i(G_{0}^{-1} + R)^{-2}J + i(G_{ 0}^{-1} + R)^{-1} = i\bar{G}_{ xy} +\bar{ A}_{x}\bar{A}_{y},{}\end{array}$$
which means
$$\displaystyle\begin{array}{rcl} & & G_{0}^{-1} + R =\bar{ G}^{-1},\qquad \bar{A} = i\bar{G}J, \\ & & i\varGamma [\bar{A},\bar{G}] = iS[\bar{A}] -\frac{1} {2}\mathop{\text{Tr}}\ln \bar{G}^{-1} -\frac{1} {2}\mathop{\text{Tr}}G_{0}^{-1}\bar{G}.{}\end{array}$$
Motivated by the result of the free theory, in the interacting case we write
$$\displaystyle{ i\varGamma [\bar{A},\bar{G}] = iS[\bar{A}] -\frac{1} {2}\mathop{\text{Tr}}\ln \bar{G}^{-1} -\frac{1} {2}\mathop{\text{Tr}}G_{0}^{-1}\bar{G} + i\varGamma _{ int}[\bar{A},\bar{G}], }$$
where in the correction part \(\varGamma _{int}\), we should use the exact propagator \(\bar{G}\) and keep only the 2PI diagrams. Evaluating the propagator at its physical value, using (2.130), leads to (2.132) with
$$\displaystyle{ \varSigma _{2PI}[\bar{A},\bar{G}] = 2\frac{\partial i\varGamma _{int}[\bar{A},\bar{G}]} {\partial \bar{G}}. }$$

A peculiarity of the 2PI formalism is that we can access the same correlation functions in different ways. For example, the self-energy can be expressed from (2.137) as the derivative of the 2PI action with respect to \(\bar{G}\), but it is also the second derivative with respect to \(\bar{A}\). This ambiguity persists for all higher point functions. For the exact correlation functions, these definitions yield the same result, but in the perturbative expansion, we usually find deviations. The derivative of the 1PI effective action, for example, always respects the symmetry of the Lagrangian, and so it satisfies Goldstone’s theorem in the case of spontaneous symmetry breaking (SSB) of a continuous symmetry. This is not true for the solution of the 2PI equation (2.132) if we substitute back the physical value of the background. The reason is that Goldstone’s theorem is satisfied at each order of perturbation theory as a result of subtle cancellations between the self-energy and the vertex corrections. Therefore, when we resum all the self-energy diagrams up to a certain order using the 2PI formalism but leave the vertex corrections at their tree-level perturbative value, we easily find a mismatch. We will return to this point later, in Sect. 4.7.

This line of thought can be continued: we can demand that all correlation functions up to the n-point function take predetermined values, denoted by \(\bar{A} \equiv V _{1}\), \(\bar{G} \equiv V _{2}\), and V k for k > 2; in general, we refer to them as V k (\(k = 1,\mathop{\ldots },n\)) (cf. [11]). Similarly as above, we can force the system to provide the required n-point functions by introducing external currents besides J x , R xy . These will be denoted by \(R_{x_{1}\ldots x_{k}}^{(k)}\) (\(k = 1,\mathop{\ldots },n\)):
$$\displaystyle{ iS \rightarrow iS +\sum _{ k=1}^{n} \frac{1} {k!}R_{x_{1},\mathop{\ldots },x_{k}}^{(k)}A_{ x_{1}}\ldots A_{x_{k}}. }$$
The derivative of W[R(k)] with respect to these sources will give us the k-point functions. We require that the fully connected part coincide with the predefined values.

We can also perform a Legendre transformation to obtain the free energy Γ[V k ], which is the functional of V k , where we can also express the external sources R(k) through V k . The physical value of the vertices comes from the requirements \(\delta \varGamma [V _{k}]/\delta V _{j} = 0\). Since we fixed all the correlation functions up to the n-point functions, the perturbative expansion should contain only n-particle-irreducible (nPI) diagrams, which remain connected even after cutting n internal lines.

2.9 Transformations of the Path Integral

In this subsection we will perform some changes in the integration variable of the path integral
$$\displaystyle{ Z[J] =\int \mathcal{D}Ae^{iS[A]+JA}. }$$
Let us consider an infinitesimal transformation \(A_{i} \rightarrow A'_{i} = A_{i} +\varepsilon _{i}\varDelta A_{i}[A]\) for which the corresponding Jacobian is unity to linear order in \(\varepsilon _{i}\) (no summation is understood for i):
$$\displaystyle{ \ln \,\mathrm{Jacobian} =\ln \det \left (\delta _{ij} +\varepsilon _{i} \frac{\partial \varDelta A_{i}} {\partial A_{j}}\right ) =\mathop{ \text{Tr}}\varepsilon _{i} \frac{\partial \varDelta A_{i}} {\partial A_{j}} + \mathcal{O}(\varepsilon ^{2})\stackrel{!}{=}0. }$$
The change in the path integral up to leading order in \(\varepsilon _{i}\) reads as
$$\displaystyle\begin{array}{rcl} Z[J]& =& \int \mathcal{D}A'e^{iS[A']+JA'} =\int \mathcal{D}Ae^{iS[A+\varepsilon _{i}\varDelta A_{i}]+J(A+\varepsilon _{i}\varDelta A_{i})} \\ & =& \int \mathcal{D}Ae^{iS[A]+JA}\left (1 + i\varepsilon _{ i}\frac{\partial S} {\partial \varepsilon _{i}} + J_{i}\varepsilon _{i}\varDelta A_{i}\right ) \\ & =& Z[J]\left (1 + i\varepsilon _{i}\left \langle \frac{\partial S} {\partial \varepsilon _{i}} - iJ_{i}\varDelta A_{i}\right \rangle \right ). {}\end{array}$$
The invariance under this change of variables implies
$$\displaystyle{ \left \langle \frac{\partial S[A]} {\partial \varepsilon _{i}} - iJ_{i}\varDelta A_{i}[A]\right \rangle = 0, }$$
which is a set of relations valid for any transformation.
One way in which we can exploit this equation is to take its nth derivative with respect to \(J_{a_{1}},\mathop{\ldots },J_{a_{n}}\):
$$\displaystyle{ \left \langle \left (\frac{\delta S} {\delta \varepsilon _{i}} - iJ_{i}\varDelta A_{i}\right )A_{a1}\ldots A_{a_{n}} - i\sum _{k=1}^{n}\delta _{ ia_{k}}A_{a1}\ldots A_{a_{k-1}}\varDelta A_{a_{k}}A_{a_{k+1}}\ldots A_{a_{n}}\right \rangle = 0. }$$
In the physical vacuum, J = 0, and we obtain
$$\displaystyle{ \left \langle \frac{\delta S} {\delta \varepsilon _{i}} A_{a1}\ldots A_{a_{n}}\right \rangle _{phys} = i\sum _{k=1}^{n}\delta _{ ia_{k}}\left \langle A_{a1}\ldots A_{a_{k-1}}\varDelta A_{a_{k}}A_{a_{k+1}}\ldots A_{a_{n}}\right \rangle _{phys}. }$$
We can also use (2.142) to find equations for W and Γ, exploiting that
$$\displaystyle{ \left \langle f(A)\right \rangle = \frac{1} {Z[J]}f\left ( \frac{\partial } {\partial J}\right )Z[J] = e^{-W}f\left ( \frac{\partial } {\partial J}\right )e^{W} = f\left ( \frac{\partial } {\partial J} + \frac{\partial W} {\partial J} \right ). }$$
Therefore, we should substitute \(A \rightarrow \partial _{J} + \partial W/\partial J\) in (2.142) to obtain its functional expression and set J = 0 at the end.
To have an equation for the effective action, we use (2.103) and
$$\displaystyle{ \frac{\partial } {\partial J_{x}} = \frac{\partial A_{y}} {\partial J_{x}} \frac{\partial } {\partial A_{y}} = \frac{\partial ^{2}W} {\partial J_{x}\partial J_{y}} \frac{\partial } {\partial A_{y}} = iG_{xy} \frac{\partial } {\partial A_{y}} }$$
(in the fermionic case, left differentiation should be applied here). Since \(iJ\,=\,\partial \varGamma /\partial A\) (with right differentiation), we have
$$\displaystyle{ \frac{\partial } {\partial \varepsilon _{i}}S\left [A + iG \frac{\partial } {\partial A}\right ] = \varDelta A_{i}\left [A + iG \frac{\partial } {\partial A}\right ] \frac{\partial \varGamma } {\partial A_{i}}. }$$
The expressions in the square brackets denote the “variables” of the functionals S and Δ A i , respectively. We should emphasize that (2.142) and therefore (2.147) are true for every transformation. Specifying the form of the transformation leads to different applications.

2.9.1 Equation of Motion, Dyson–Schwinger Equations

In the simplest application, the transformation is a simple shift of the field value, \(A'(x) = A(x) +\varepsilon (x)\). Then Δ A = 1, and the \(\varepsilon\) derivative is the same as the A derivative, so we obtain the Dyson–Schwinger equations
$$\displaystyle{ \left \langle \frac{\partial S} {\partial A_{i}} - iJ_{i}\right \rangle = 0\qquad \mathrm{or}\qquad \frac{\partial S} {\partial A_{i}}\left [A + iG \frac{\partial } {\partial A}\right ] = \frac{\partial \varGamma } {\partial A_{i}}. }$$
In particular, for the physical vacuum (J = 0), we obtain the EoM
$$\displaystyle{ \left \langle \frac{\partial S} {\partial A_{i}}\right \rangle \biggr |_{phys} = 0\qquad \mathrm{or}\qquad \frac{\partial S} {\partial A_{i}}\left [A + iG \frac{\partial } {\partial A}\right ]\biggr |_{phys} = 0. }$$
An illustrative example of the application of these general formulas is presented in the next subsection. Taking the derivative of the previous equation with respect to \(J_{a_{1}},\mathop{\ldots },J_{a_{n}}\) we obtain, in the physical vacuum J = 0,
$$\displaystyle{ \left \langle \frac{\delta S} {\delta A_{i}}A_{a1}\ldots A_{a_{n}}\right \rangle = i\sum _{k=1}^{n}\delta _{ ia_{k}}\left \langle A_{a1}\ldots A_{a_{k-1}}A_{a_{k+1}}\ldots A_{a_{n}}\right \rangle. }$$
This relation corresponds to (2.144) for the actual transformation.
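A zero-dimensional toy version makes this identity easy to verify numerically (our own illustration; in the Euclidean weight \(e^{-S}\) the factors of i drop out). For n = 1 the relation reduces to \(\langle S'(x)\,x\rangle = 1\), and for n = 3 to \(\langle S'(x)\,x^{3}\rangle = 3\langle x^{2}\rangle\), both following from integration by parts:

```python
import numpy as np

# Zero-dimensional Euclidean analogue of the Dyson-Schwinger identity:
# with weight exp(-S),  <S'(x) x^n> = n <x^(n-1) ...>,
# by integration by parts.  Toy action (our choice):
m2, lam = 1.0, 0.7

x  = np.linspace(-10.0, 10.0, 400001)
S  = 0.5 * m2 * x**2 + lam / 24.0 * x**4
Sp = m2 * x + lam / 6.0 * x**3          # S'(x)
w  = np.exp(-S)

def avg(f):                              # <f> = int f e^{-S} / int e^{-S}
    return np.sum(f * w) / np.sum(w)

lhs_n1 = avg(Sp * x)                     # should equal 1
lhs_n3 = avg(Sp * x**3)                  # should equal 3 <x^2>
print(lhs_n1, lhs_n3, 3 * avg(x**2))
```

The same mechanism, with functional instead of ordinary derivatives, underlies the full field-theoretic identities above.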
In the real-time formalism, we should take into account that the Lagrangian is \(\mathcal{L} = \mathcal{L}[A^{(1)}] -\mathcal{L}[A^{(2)}]\), and so
$$\displaystyle{ \frac{\partial S} {\partial A^{(i)}} = (-1)^{i+1} \frac{\partial S} {\partial A}\biggr |_{A=A^{(i)}}. }$$
Hence, the Dyson–Schwinger relations for the n-point functions acquire an extra sign:
$$\displaystyle{ \left \langle \frac{\partial S} {\partial A}\biggr |_{A=A^{(i)}}A_{a1}\ldots A_{a_{n}}\right \rangle = i(-1)^{i+1}\sum _{ k=1}^{n}\delta _{ ia_{k}}\left \langle A_{a1}\ldots A_{a_{k-1}}A_{a_{k+1}}\ldots A_{a_{n}}\right \rangle. }$$

2.9.2 Ward Identities from a Global Symmetry

If the transformation \(A' = A +\varepsilon \varDelta A\) corresponds to a global symmetry of the system, then the derivative is (cf. (2.2))
$$\displaystyle{ \frac{\partial S} {\partial \varepsilon _{i}} = \partial _{\mu }j_{i}^{\mu }. }$$
We remark here that j μ is the conserved current, which is a functional of the field variables, while J is the current assigned to sustain a certain field configuration. In this way, we obtain
$$\displaystyle{ \left \langle \partial _{\mu }j_{i}^{\mu } - iJ_{ i}\varDelta A_{i}\right \rangle = 0\qquad \mathrm{or}\qquad \partial _{\mu }j^{\mu }\left [A + iG \frac{\partial } {\partial A}\right ] = \frac{\partial \varGamma } {\partial A_{i}}\varDelta A_{i}\left [A + iG \frac{\partial } {\partial A}\right ]. }$$
For the n-point function, we obtain, from (2.144),
$$\displaystyle{ \partial _{\mu }\left \langle j_{i}^{\mu }A_{ a1}\ldots A_{a_{n}}\right \rangle = i\sum _{k=1}^{n}\delta _{ ia_{k}}\left \langle A_{a1}\ldots A_{a_{k-1}}\varDelta A_{i}A_{a_{k+1}}\ldots A_{a_{n}}\right \rangle. }$$
This is the generic form of the Ward identities. These are nothing other than the current conservation equations for the quantum case.
We discuss here three short applications.
  1.
    If we integrate over the variable i (which also contains the spacetime integration), we obtain
    $$\displaystyle{ 0 =\sum _{ k=1}^{n}\left \langle A_{ a1}\ldots A_{a_{k-1}}\varDelta A_{a_{k}}A_{a_{k+1}}\ldots A_{a_{n}}\right \rangle. }$$
    The right-hand side is the total change in the expectation value
    $$\displaystyle{ 0 =\varDelta \left \langle A_{a_{1}}\ldots A_{a_{n}}\right \rangle. }$$
    This means that the expectation value of every n-point function is invariant under the global symmetry transformation. Note that this is true for the complete n-point function, not just the connected part.
  2.
    In the case of linear continuous transformations, \(\varDelta A_{i} = T_{ij}A_{j}\). Taking the integrated form of the effective action expression with a constant (i.e., zero-momentum) A field, we have
    $$\displaystyle{ 0 = \frac{\partial \varGamma } {\partial A_{i}}T_{ij}A_{j}, }$$
    where one sums over repeated indices. Taking its functional derivative yields
    $$\displaystyle{ \frac{\partial ^{2}\varGamma } {\partial A_{k}\partial A_{i}}T_{ij}A_{j} + \frac{\partial \varGamma } {\partial A_{i}}T_{ik} = 0. }$$
    At the physical point, \(\frac{\partial \varGamma } {\partial A_{i}} = 0\), but the field itself can take a (constant) expectation value in the SSB case. Therefore,
    $$\displaystyle{ \frac{\partial ^{2}\varGamma } {\partial A_{k}\partial A_{i}}T_{ij}A_{j} = 0, }$$
    which is Goldstone’s theorem: the inverse propagator has a zero mode at zero momentum in the SSB case.
  3.
    Remaining in the class of linearly represented symmetries, we write down the Ward identity for two fields explicitly, displaying the spacetime indices:
    $$\displaystyle{ \partial _{x}^{\mu }\left \langle j_{\mu }(x)A_{ i}(y)A_{j}(z)\right \rangle = i\delta (x - y)T_{ik}\left \langle A_{k}(y)A_{j}(z)\right \rangle + i\delta (x - z)T_{jk}\left \langle A_{i}(y)A_{k}(z)\right \rangle. }$$
    We assume that we are in the symmetric case (the SSB case is only formally more complicated). For the left-hand side, one introduces a 1PI proper vertex:
    $$\displaystyle{ \left \langle j_{\mu }(x)A_{i}(y)A_{j}(z)\right \rangle = iG_{ii'}(y - y')\,iG_{jj'}(z - z')\,\varGamma _{i'j'}^{\mu }(x,y',z'). }$$
    The Ward identity takes the following form after passage to Fourier space:
    $$\displaystyle{ G_{ii'}(p)G_{jj'}(q)(-ik_{\mu })\varGamma _{i'j'}^{\mu }(k,p,q) = T_{ ik}G_{jk}(q) + T_{jk}G_{ik}(p), }$$
    where k + p + q = 0. Multiplying by the inverse propagators gives us
    $$\displaystyle{ (-ik_{\mu })\varGamma _{ij}^{\mu }(k,p,q) = G_{ ik}^{-1}(p)T_{ kj} + G_{jk}^{-1}(q)T_{ ki}. }$$
    There is a special case in which the propagation is diagonal, \(G_{ij} = G_{i}\delta _{ij}\) and \(T_{ij} = -T_{ji}\). Then
    $$\displaystyle{ (-ik_{\mu })\varGamma _{ij}^{\mu }(k,p,q) = T_{ ij}\left (G_{i}^{-1}(p) - G_{ j}^{-1}(q)\right ). }$$
    This is an often-used consequence of the Ward identity.
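Goldstone’s theorem from the second application above can be illustrated at tree level (an illustration of ours, not part of the text): for a toy O(2) potential with m² < 0, the Hessian of V at the broken minimum annihilates the generator direction T·Ā, producing the zero mode.

```python
import numpy as np

# Tree-level illustration of Goldstone's theorem in an assumed O(2) model:
#   V = m2/2 * |phi|^2 + lam/24 * |phi|^4,  with m2 < 0.
m2, lam = -1.0, 2.0

def V(phi):
    rho = phi @ phi
    return 0.5 * m2 * rho + lam / 24.0 * rho**2

def hessian(phi, h=1e-5):
    # second derivatives by central finite differences
    n = len(phi)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (V(phi + e_i + e_j) - V(phi + e_i - e_j)
                       - V(phi - e_i + e_j) + V(phi - e_i - e_j)) / (4 * h * h)
    return H

v = np.sqrt(-6.0 * m2 / lam)             # broken-phase minimum
A = np.array([v, 0.0])
T = np.array([[0.0, -1.0], [1.0, 0.0]])  # O(2) generator

H = hessian(A)
print(H @ (T @ A))   # ~ zero vector: the Goldstone direction
```

One eigenvalue of the Hessian (the inverse propagator at zero momentum) vanishes, while the radial mode stays massive, exactly as the identity above requires.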

2.10 Example: Φ⁴ Theory at Finite Temperature

The simplest example of perturbation theory appropriate for demonstrating the mechanisms of finite-temperature calculations is the Φ⁴ theory of a one-component scalar field. The Lagrangian reads
$$\displaystyle{ \mathcal{L} = \frac{1} {2}\varPhi (-\partial ^{2} - m^{2})\varPhi - \frac{\lambda } {24}\varPhi ^{4}. }$$
This is the basic form of the Lagrangian; at finite temperature, however, we need it in other forms. The free part was discussed in Sect. 2.5; the propagators are listed in (2.76). Here the Klein–Gordon divisor is 1. The Euclidean and Keldysh forms of the interaction are, respectively,
$$\displaystyle{ \mathcal{L}_{E,int} = \frac{\lambda } {24}\varPhi ^{4},\qquad \mathcal{L}_{ int} = - \frac{\lambda } {24}\left (\varPhi _{1}^{4} -\varPhi _{ 2}^{4}\right ), }$$
while in the R/A formalism, we obtain
$$\displaystyle{ \mathcal{L}_{int} = -\frac{\lambda } {6}\varPhi _{r}^{3}\varPhi _{ a} - \frac{\lambda } {24}\varPhi _{r}\varPhi _{a}^{3}. }$$
In perturbation theory, we first calculate the 1PI correction to the two-point function (the self-energy). This is the amputated connected two-point function, as can be seen from (2.115). At the one-loop level, in coordinate space and in the Euclidean formalism, we obtain
$$\displaystyle{ \varSigma _{E}(x) = -\left \langle \text{T}_{C}\varPhi ^{(3)}(x)\varPhi ^{(3)}(0)(-S_{ E,int})\right \rangle _{amp} = \frac{\lambda } {2}G^{(33)}(0)\delta (x) = \frac{\lambda } {2}\mathcal{T}\delta (x), }$$
where \(\mathcal{T}\) is the tadpole function defined in Appendix B.1. In momentum space, the self-energy is a constant. Therefore, in the R/A formalism, \(\varSigma ^{(ar)} =\varSigma ^{(ra)} =\varSigma _{E}\) and \(\varSigma ^{(aa)} = 0\). Physically, it is a mass correction. Using the results of Appendix B.1 derived with dimensional regularization, at zero temperature we obtain for the second derivative of the effective action
$$\displaystyle{ \varGamma ^{(2)}(p) = p^{2} + m^{2} + \frac{\lambda m^{2}} {32\pi ^{2}}\left [-\frac{1} {\varepsilon } +\gamma _{E} - 1 +\ln \frac{m^{2}} {4\pi \mu ^{2}} \right ], }$$
while in the high-temperature expansion, we have
$$\displaystyle{ \varGamma ^{(2)}(p) = p^{2} + m^{2} + \frac{\lambda T^{2}} {24} -\frac{\lambda mT} {8\pi } + \frac{\lambda m^{2}} {32\pi ^{2}}\left [-\frac{1} {\varepsilon } +\gamma _{E} -\ln \frac{4\pi T^{2}} {\mu ^{2}} \right ]. }$$
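The leading terms of the high-temperature expansion can be checked by direct numerical integration of the thermal tadpole (a sketch of ours; the function name and parameters are assumptions):

```python
import numpy as np

# Thermal part of the tadpole  T(m, T) = \int d^3p/(2pi)^3  n_B(E)/E,
# compared against its leading high-temperature expansion
#   T^2/12 - m*T/(4*pi)        (valid for m << T).
def tadpole_thermal(m, T, pmax_over_T=60.0, n=200000):
    p = np.linspace(1e-8, pmax_over_T * T, n)
    E = np.sqrt(p * p + m * m)
    f = p * p / (E * np.expm1(E / T))    # p^2 n_B(E)/E
    dp = p[1] - p[0]
    # trapezoid rule
    return (f.sum() - 0.5 * (f[0] + f[-1])) * dp / (2.0 * np.pi**2)

m, T = 0.1, 1.0
numeric = tadpole_thermal(m, T)
high_T  = T**2 / 12.0 - m * T / (4.0 * np.pi)
print(numeric, high_T)   # close for m << T
```

The residual difference is of order m² ln(m/T), the next term of the expansion, which is why the agreement improves rapidly as m/T decreases.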
In the symmetric phase, the three-point function vanishes; the four-point proper vertex is therefore the amputated four-point function. The first quantum correction is of second order in the coupling constant expansion. In the Euclidean formalism,
$$\displaystyle{ \varGamma _{E}^{(4)}(p_{ 1},p_{2},p_{3},p_{4}) = -\left \langle \text{T}_{C}\varPhi (p_{1})\varPhi (p_{2})\varPhi (p_{3})\varPhi (p_{4})(1 - S_{E,int} + \frac{1} {2}S_{E,int}^{2})\right \rangle _{ amp}, }$$
where we should use fields on the Euclidean contour (segment 3). The first term gives a disconnected piece. For the connected part, we have from momentum conservation
$$\displaystyle{\varGamma _{E,conn}^{(4)}(p_{ 1},p_{2},p_{3},p_{4}) =\bar{\varGamma }_{ E}^{(4)}(p_{ 1},p_{2},p_{3},p_{4})(2\pi )^{4}\delta (p_{ 1} + p_{2} + p_{3} + p_{4}),}$$
$$\displaystyle{ \bar{\varGamma }_{E}^{(4)}(p_{ 1},p_{2},p_{3},p_{4}) =\lambda -\frac{\lambda ^{2}} {2}\left [\mathcal{I}_{E}(p_{1} + p_{2}) + \mathcal{I}_{E}(p_{1} + p_{3}) + \mathcal{I}_{E}(p_{1} + p_{4})\right ], }$$
where \(\mathcal{I}_{E}(k)\) is the contribution from the bubble diagram, defined in Appendix B.2.
We can also give the effective potential at this lowest order. We use the background-field method of (2.106), i.e., we shift the field \(\varPhi \rightarrow \varphi +\bar{\varPhi }\) and omit terms linear in the fluctuations (and, at higher orders, all contributions to the one-point function). We obtain for the shifted Lagrangian
$$\displaystyle{ \mathcal{L} = \mathcal{L}(\bar{\varPhi }) + \frac{1} {2}\varphi (-\partial ^{2} - m^{2} - \frac{\lambda } {2}\bar{\varPhi }^{2})\varphi - \frac{\lambda } {6}\bar{\varPhi }\varphi ^{3} - \frac{\lambda } {24}\varphi ^{4}. }$$
The first term gives the classical effective potential. For the first quantum corrections, we need just the free fluctuation part, which leads to a quantum correction δ Γ. We could use formula (2.86) with μ = 0, α = 1, but here we should keep the regularized form. For constant fields, we can use \(\frac{\partial \delta \varGamma } {\partial \bar{\varPhi }^{2}} = \frac{1} {2}\varSigma _{E}(p = 0,\bar{\varPhi })\). In the self-energy, one can take p = 0, since it is a tadpole and does not depend on the momentum at all. In \(\varSigma\), all the \(\bar{\varPhi }\)-dependence enters through the mass term. Therefore, \(\varSigma _{E}(M) =\lambda \frac{\partial \delta \varGamma } {\partial M^{2}}\), where \(M^{2} = m^{2} + \frac{\lambda } {2}\bar{\varPhi }^{2}\). Using (2.169), we have
$$\displaystyle{ \frac{\partial \delta \varGamma } {\partial M^{2}} = \frac{1} {2}\mathcal{T} (M,T). }$$
The integration constant is \(\delta \varGamma \vert _{M=0} = \frac{\pi ^{2}T^{4}} {90}\). Finally, we obtain at zero temperature
$$\displaystyle{ \delta \varGamma _{E}(T = 0) \equiv V _{eff}(T = 0) = \frac{m^{2}} {2} \bar{\varPhi }^{2} + \frac{\lambda } {24}\bar{\varPhi }^{4} + \frac{M^{4}} {64\pi ^{2}} \left [\mathcal{D}_{\varepsilon } +\ln \frac{M^{2}} {4\pi \mu ^{2}} \right ], }$$
and in the high-temperature expansion,
$$\displaystyle{ V _{eff} = \frac{\pi ^{2}T^{4}} {90} + \frac{m^{2}} {2} \bar{\varPhi }^{2} + \frac{\lambda } {24}\bar{\varPhi }^{4} + \frac{T^{2}M^{2}} {24} -\frac{(M^{2})^{3/2}T} {12\pi } + \frac{M^{4}} {64\pi ^{2}} \left [\mathcal{D}_{\varepsilon } + \frac{3} {2} -\ln \frac{4\pi T^{2}} {\mu ^{2}} \right ], }$$
where \(\mathcal{D}_{\varepsilon } = -\frac{1} {\varepsilon } +\gamma _{E} -\frac{3} {2}\) is a divergent constant. We remark here that in the spontaneously broken phase, where \(m^{2} < 0\), the quantum corrections cannot be interpreted physically for \(\bar{\varPhi }^{2}\) values yielding \(M^{2} < 0\), because of the \((M^{2})^{3/2}\) term.
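A simple consequence of the high-temperature potential can be sketched numerically, under the leading-order assumption that only the T²M²/24 term is kept (the cubic term of the expansion is neglected here): in the broken phase (m² < 0), the curvature at the origin turns positive above \(T_{c} = \sqrt{-24 m^{2}/\lambda }\), signaling symmetry restoration.

```python
import math

# Leading-order estimate of the symmetry-restoration temperature in the
# broken phase (m2 < 0): the thermal term lam*T^2/24 lifts the curvature
# of V_eff at the origin.  Illustrative parameter values:
m2, lam = -1.0, 0.5
Tc = math.sqrt(-24.0 * m2 / lam)

def curvature_at_origin(T):
    # d^2 V_eff / d Phi^2 at Phi = 0, keeping only the T^2 term
    return m2 + lam * T**2 / 24.0

print(Tc)
print(curvature_at_origin(0.9 * Tc) < 0)   # below Tc: still broken
print(curvature_at_origin(1.1 * Tc) > 0)   # above Tc: symmetric
```

Near T_c the neglected cubic and loop terms matter, so this is only the leading estimate of the transition point, not its precise characterization.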
We may also illustrate the Dyson–Schwinger equations by means of the example of the Φ⁴ model. We begin with the generator of the 1PI DS equations (2.148):
$$\displaystyle{ \varGamma _{x}^{(1)} = \mathcal{K}_{ xy}\varPhi _{y} - \frac{\lambda } {6}\left (\varPhi _{x}^{3} + 3\varPhi _{ x}iG_{xx} + iG_{xx'}iG_{xy'}iG_{xz'}\varGamma _{x'y'z'}^{(3)}\right ), }$$
where \(\mathcal{K}(p) = p^{2} - m^{2}\). Here the indices of the different quantities refer simply to spacetime points, x is the free argument, and one has summation on the others. After differentiating once more with respect to Φ y , we obtain
$$\displaystyle\begin{array}{rcl} \varGamma _{xy}^{(2)}& =& \mathcal{K}_{ xy} - \frac{\lambda } {2}\left (\varPhi _{x}^{2} + iG_{ xx}\right )\delta _{xy} \\ & -& \frac{\lambda } {2}\left (\varPhi _{x}iG_{xa}iG_{xb}i\varGamma _{aby}^{(3)} + iG_{ xy'}iG_{xz'}\varGamma _{x'y'z'}^{(3)}iG_{ xa}iG_{x'b}i\varGamma _{aby}^{(3)}\right ) \\ & -& \frac{\lambda } {6}iG_{xx'}iG_{xy'}iG_{xz'}\varGamma _{x'y'z'y}^{(4)}. {}\end{array}$$
As a specific application, we can reproduce the perturbative results: we go to the symmetric phase, where Φ = 0, and restrict ourselves to the lowest order (i.e., we neglect terms \(\mathcal{O}(\lambda ^{2})\)). In Fourier space, we obtain
$$\displaystyle{ \varGamma ^{(2)}(p) = p^{2} - m^{2} - \frac{\lambda } {2}iG_{xx}. }$$
This form actually agrees with the 2PI equation (2.132) with the self-energy from (2.169); here \(iG_{xx}\) is the (momentum-independent) coincident-point propagator, i.e., the tadpole. Therefore, this equation is in fact a gap equation for the propagator.


  1.
    We remark that in (2 + 1)-dimensional field theories, there is the possibility to define particles with any α of unit modulus. These are called anyons.

  2.
    This vacuum contribution is usually attributed to the Casimir effect, where one measures the energy difference arising when the volume of the quantization space is changed. It is also worth noting that fermions and bosons contribute with opposite signs, so that in supersymmetric models the net zero-point contribution vanishes.


  1.
    M.E. Peskin, D.V. Schroeder, An Introduction to Quantum Field Theory (Westview Press, New York, 1995)
  2.
    C. Itzykson, J.-B. Zuber, Quantum Field Theory (McGraw-Hill, New York, 1980)
  3.
    J.D. Bjorken, S.D. Drell, Relativistic Quantum Fields (McGraw-Hill, New York, 1965)
  4.
    N.P. Landsmann, Ch.G. van Weert, Real- and imaginary-time field theory at finite temperature and density. Phys. Rep. 145, 141–249 (1987)
  5.
    M. Le Bellac, Thermal Field Theory (Cambridge University Press, Cambridge, 1996)
  6.
    J.I. Kapusta, C. Gale, Finite-Temperature Field Theory: Principles and Applications (Cambridge University Press, Cambridge, 2006)
  7.
    I. Montvay, G. Münster, Quantum Fields on a Lattice (Cambridge University Press, Cambridge, 1994)
  8.
    J. Collins, Renormalization. Cambridge Monographs on Mathematical Physics (Cambridge University Press, Cambridge, 1984)
  9.
    L.D. Landau, E.M. Lifshitz, The Classical Theory of Fields. Course of Theoretical Physics, vol. 2 (Butterworth-Heinemann, Oxford, 1975)
  10.
    M. Garny, M.M. Müller, Phys. Rev. D 80, 085011 (2009)
  11.
    J. Berges, Phys. Rev. D 70, 105010 (2004)

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

Antal Jakovác and András Patkós
Institute of Physics, Roland Eötvös University, Budapest, Hungary
