Long Term Behaviour of a Reversible System of Interacting Random Walks

  • Svante Janson
  • Vadim Shcherbakov
  • Stanislav Volkov

Open Access Article

Abstract

This paper studies the long-term behaviour of a system of interacting random walks labelled by vertices of a finite graph. We show that the system undergoes phase transitions, with different behaviour in various regions, depending on model parameters and properties of the underlying graph. We provide the complete classification of the long-term behaviour of the corresponding continuous time Markov chain, identifying whether it is null recurrent, positive recurrent, or transient. The proofs are partially based on the reversibility of the model, which allows us to use the method of electric networks. We also provide some alternative proofs (based on the Lyapunov function method and the renewal theory), which are of interest in their own right, since they do not require reversibility and can be applied to more general situations.

Keywords

Markov chain · Random walk · Transience · Recurrence · Lyapunov function · Martingale · Renewal measure · Return time

Mathematics Subject Classification

60K35 · 60G50

1 Introduction

Let G be a finite non-oriented graph with \(n\ge 1\) vertices labelled by \(1,2,\dots , n\). Somewhat abusing notation, we will use G also for the set of vertices of this graph. Let \(A=(a_{ij})\) be the adjacency matrix of the graph, that is, \(a_{ij}=a_{ji}=1\) if vertices i and j are adjacent (connected by an edge) and \(a_{ij}=0\) otherwise. If vertices \(i, j\in G\) are connected by an edge, i.e. \(a_{ij}=1\), we call them neighbours and write \(i\sim j\). By definition, a vertex is not a neighbour of itself: \(a_{ii}=0\) for all \(i=1,\dots ,n\) (there are no self-loops).

Let \({\mathbb {Z}}_{+}\) be the set of all non-negative integers. Consider a continuous-time Markov chain, CTMC for short, \(\xi (t)=(\xi _1(t),\ldots , \xi _n(t))\in {\mathbb {Z}}_{+}^{n}\) evolving as follows. Given \(\xi (t)=\xi =(\xi _1,\dots , \xi _n)\in {\mathbb {Z}}_{+}^{n}\), a component \(\xi _i\) increases by 1 at the rate
$$\begin{aligned} e^{\alpha \xi _i+\beta (A\xi )_i}=e^{\alpha \xi _i+\beta \sum _{j: j\sim i}\xi _j}, \end{aligned}$$
(1.1)
where \(\alpha , \beta \in {{\mathbb {R}}}\) are two given constants. A positive component \(\xi _i\) decreases by 1 at constant rate 1.
In other words, denoting the rates of the CTMC \(\xi (t)\) by \(q_{\xi ,\eta }\), for \(\xi ,\eta \in {\mathbb {Z}}_+^n\), we have
$$\begin{aligned} q_{\xi ,\eta }= {\left\{ \begin{array}{ll} e^{\alpha \xi _i+\beta (A\xi )_i}=e^{\alpha \xi _i+\beta \sum _{j: j\sim i}\xi _j}, &{} \eta =\xi +\mathbf{e}_i, \\ 1, &{} \eta =\xi -\mathbf{e}_i,\, \text {if }\, \xi _i>0, \\ 0, &{} ||\eta -\xi ||\ne 1, \end{array}\right. } \end{aligned}$$
(1.2)
where \(\mathbf{e}_i\in {\mathbb {Z}}_{+}^n\) is the i-th unit vector, and \(||\cdot ||\) denotes the usual Euclidean norm.

It is easy to see that if \(\beta =0\), then CTMC \(\xi (t)\) is a collection of n independent reflected continuous-time random walks on \({\mathbb {Z}}_{+}\) (symmetric if also \(\alpha =0\)). In general, the Markov chain can be regarded as an inhomogeneous random walk on the infinite graph \({\mathbb {Z}}_{+}^G\). Alternatively, it can be interpreted as a system of n random walks on \({\mathbb {Z}}_{+}\) labelled by the vertices of graph G and evolving subject to a nearest neighbour interaction.
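To make the dynamics concrete, the rates (1.1)–(1.2) can be simulated with a standard Gillespie scheme. The following sketch is our own illustration, not part of the paper; the function name `step`, the parameter values and the two-vertex example graph are arbitrary choices (numpy is an assumed dependency).

```python
import numpy as np

def step(xi, A, alpha, beta, rng):
    """One jump of the CTMC (1.2): component i increases by 1 at rate
    exp(alpha*xi_i + beta*(A xi)_i); a positive component decreases at rate 1."""
    up = np.exp(alpha * xi + beta * (A @ xi))    # birth rates, one per vertex
    down = (xi > 0).astype(float)                # death rates: 1 if xi_i > 0
    rates = np.concatenate([up, down])
    total = rates.sum()
    wait = rng.exponential(1.0 / total)          # exponential holding time
    k = rng.choice(rates.size, p=rates / total)  # which transition fires
    xi = xi.copy()
    n = xi.size
    if k < n:
        xi[k] += 1
    else:
        xi[k - n] -= 1
    return xi, wait

# Example: the graph with two adjacent vertices, alpha = -1, beta = 0.3
A = np.array([[0, 1], [1, 0]])
rng = np.random.default_rng(0)
xi, t = np.zeros(2, dtype=int), 0.0
for _ in range(1000):
    xi, dt = step(xi, A, -1.0, 0.3, rng)
    t += dt
```

With these example values \(\alpha +\beta \lambda _1(G)=-1+0.3\cdot 1<0\), so by Theorem 2.1 below the chain is positive recurrent, and a simulated trajectory keeps returning to \({{\mathbf {0}}}\).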

The purpose of the present paper is to study how the long term behaviour of CTMC \(\xi (t)\) depends on the parameters \(\alpha \) and \(\beta \) together with properties of the graph G. In our main result (Theorem 2.1), we give a complete classification saying whether the Markov chain is recurrent or transient, and in the recurrent case whether it is positive recurrent or null recurrent. We find phase transitions, with different behaviour in various regions depending on the parameters \(\alpha \), \(\beta \) and properties of graph G. Furthermore, we give results (Theorem 6.1) on whether the Markov chain is explosive or not. (This is relevant for the transient case only, since a recurrent CTMC is always non-explosive.) These results are less complete and leave one case open.

It is obvious that CTMC \(\xi (t)\) is irreducible; hence the initial distribution is irrelevant for our results. (We may if we like assume that we start at \({{\mathbf {0}}}=(0,\dots ,0)\in {\mathbb {Z}}_+^n\).)

CTMC \(\xi (t)\) was introduced in [13], where its long term behaviour was studied in several cases. In particular, conditions for positive or null recurrence and transience were obtained in some special cases; these results are extended in the present paper. In addition, the typical asymptotic behaviour of the Markov chain was studied in some transient cases.

One example of our results is the case \(\alpha <0\) and \(\beta >0\), which is of particular interest because of the following phenomenon observed in [13] in some special cases. If \(\alpha <0\) and \(\beta =0\), then, as said above, CTMC \(\xi (t)\) is formed by a collection of independent positive recurrent reflected random walks on \({\mathbb {Z}}_{+}\), and is thus positive recurrent. If both \(\alpha <0\) and \(\beta <0\), then the Markov chain is still positive recurrent (as shown below). The interaction in this case is, in a sense, competitive, as neighbours obstruct each other's growth. Now keep \(\alpha <0\) fixed and let \(\beta >0\). If \(\beta \) is positive but not large, then one could intuitively expect the Markov chain to remain positive recurrent (“stable”), as the interaction (cooperative in this case) is not strong enough. On the other hand, if \(\beta >0\) is sufficiently large, then intuition suggests that the Markov chain becomes transient (“unstable”). It turns out that this is correct and that the phase transition in the model behaviour occurs at the critical value \(\beta =\frac{|\alpha |}{\lambda _1(G)}\), where \(\lambda _1(G)\) is the largest eigenvalue of (the adjacency matrix of) the graph G. Namely, if \(\beta <\frac{|\alpha |}{\lambda _1(G)}\), then the Markov chain is positive recurrent, and if \(\beta \ge \frac{|\alpha |}{\lambda _1(G)}\), then it is transient. Moreover, it turns out that exactly in the critical regime, i.e., \(\beta =\frac{|\alpha |}{\lambda _1(G)}\), the Markov chain is non-explosive transient. We conjecture that if \(\beta >\frac{|\alpha |}{\lambda _1(G)}\), then it is explosive transient. This remains an open problem in the general case (see Remark 6.1 below). Another important contribution of this paper to the previous study of the Markov chain is a recurrence/transience classification in the case \(\alpha =0\) and \(\beta <0\). This case was discussed in [13] only for the simplest graph with two vertices. We show that in general there are only two possible long term behaviours of the Markov chain if \(\alpha =0\) and \(\beta <0\). Namely, CTMC \(\xi (t)\) is either non-explosive transient or null recurrent, and this depends only on the independence number of the graph G.
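For a concrete feel for this phase boundary, \(\lambda _1(G)\) is easy to compute numerically. The sketch below is our own illustration (numpy is an assumed dependency; the cycle \({\mathsf {C}}_4\) and the value \(\alpha =-1\) are arbitrary choices). For any cycle \(\lambda _1=2\), so the critical value is \(\beta =1/2\).

```python
import numpy as np

# Adjacency matrix of the cycle C_4 (vertices 0-1-2-3-0)
n = 4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1

lam1 = np.linalg.eigvalsh(A)[-1]  # largest eigenvalue; equals 2 for any cycle
alpha = -1.0
beta_crit = abs(alpha) / lam1     # phase boundary beta = |alpha| / lambda_1(G)
# beta < beta_crit: positive recurrent;  beta >= beta_crit: transient
```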

We also consider some variations of the Markov chain defined above. First, we include in our results the Markov chain above with dynamics obtained by setting \(\beta =-\infty \) (with convention \(0\cdot \infty =0\)). In other words, a component cannot jump up (only down, when possible), if at least one of its neighbours is non-zero; this can thus be interpreted as hard-core interaction. See Sect. 3.3 for more details on this hard-core case.

In Sect. 5 we consider the discrete time Markov chain (DTMC) \(\zeta (t)\in {\mathbb {Z}}_{+}^n\) that corresponds to CTMC \(\xi (t)\), i.e. the corresponding embedded DTMC. We show that our main results also apply to this DTMC.

Finally, in Sect. 7, we study the CTMC with the rates given by
$$\begin{aligned} {{\widetilde{q}}}_{\xi ,\eta }= {\left\{ \begin{array}{ll} e^{\alpha \xi _i}, &{} \eta =\xi +\mathbf{e}_i, \\ e^{-\beta \sum _{j: j\sim i}\xi _j}, &{} \eta =\xi -\mathbf{e}_i,\, \text {if }\, \xi _i>0, \\ 0, &{} ||\eta -\xi ||\ne 1. \end{array}\right. } \end{aligned}$$
(1.3)
We show that similar results hold for this chain, although there is a minor difference.

Our proofs essentially use the method of electric networks; this is possible since the CTMC \(\xi (t)\) is reversible (see Sect. 3.1). The use of reversibility was rather limited in [13], where the Lyapunov function method and direct probabilistic arguments were the main research techniques. In addition, we provide examples of alternative proofs of some of our results based on the Lyapunov function method and renewal theory for random walks. The advantage of these alternative methods is that they do not require reversibility and can be applied in more general situations. Therefore, the alternative proofs are of interest in their own right.

Remark 1.1

In the case \(\alpha =\beta =0\), all rates in (1.1) equal 1, and the Markov chain is a continuous-time version of a simple random walk on \({\mathbb {Z}}_+^n\). It is known that a simple random walk on the octant \({\mathbb {Z}}_+^n\) is null recurrent for \(n\le 2\) and transient for \(n\ge 3\); this is a variant of the corresponding well-known result for simple random walk on \({\mathbb {Z}}^n\), and can rather easily be shown using electric network theory, see Example 3.1 below.

Remark 1.2

We allow the graph G to be disconnected. However, there is no interaction between different components of G, and the CTMC \(\xi (t)\) consists of independent Markov chains defined by the connected components of G. Hence, the case of main interest is when G is connected.

Remark 1.3

The case when G has no edges is somewhat exceptional but also rather trivial, since then the value of \(\beta \) is irrelevant, and \(\xi (t)\) consists of n independent continuous-time random walks on \({\mathbb {Z}}_+\); in fact, \(\xi (t)\) then is as in the case \(\beta =0\) for any other G with n vertices. In particular, if G has no edges, we may assume \(\beta =0\).

Remark 1.4

CTMC \(\xi (t)\) is a model of interacting spins and, as such, is related to models of statistical physics. The stationary distribution of a finite Markov chain with bounded components and the same transition rates is of interest in statistical physics. In particular, if the components take only values 0 and 1, then the stationary distribution of the corresponding Markov chain is equivalent to a special case of the famous Ising model. One of the main problems in statistical physics is to determine whether such a probability distribution undergoes a phase transition as the underlying graph expands indefinitely. In the present paper, we keep the finite graph G fixed, but allow arbitrarily large components \(\xi _i\). We then study phase transitions of this model, in the sense discussed above.

2 The Main Results

In order to state our results, we need two definitions from graph theory. We also let e(G) denote the number of edges in G.

Definition 2.1

The eigenvalues of a finite graph G are the eigenvalues of its adjacency matrix A. These are real, since A is symmetric, and we denote them by \(\lambda _1(G)\ge \lambda _2(G)\ge \dots \ge \lambda _n(G)\), so that \(\lambda _1:=\lambda _1(G)\) is the largest eigenvalue.

Note that \(\lambda _1(G)>0\) except in the rather trivial case \(e(G)=0\) (see Remark 1.3).

Definition 2.2

  (i) An independent set of vertices in a graph G is a set of vertices such that no two vertices in the set are adjacent.

  (ii) The independence number \(\kappa = \kappa (G)\) of a graph G is the cardinality of a largest independent set of vertices.
For example, if G is a cycle graph \({\mathsf {C}}_n\) with n vertices, then \(\kappa =\lfloor n/2 \rfloor \).
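For small graphs, \(\kappa (G)\) can be checked by brute force. The sketch below is our own illustration (the function name and the edge-list representation are our choices); it confirms \(\kappa ({\mathsf {C}}_n)=\lfloor n/2 \rfloor \) for a few cycles.

```python
from itertools import combinations

def independence_number(n, edges):
    """Largest size of a vertex subset of {0,...,n-1} spanning no edge."""
    edge_set = {frozenset(e) for e in edges}
    for size in range(n, 0, -1):
        for subset in combinations(range(n), size):
            if not any(frozenset(p) in edge_set
                       for p in combinations(subset, 2)):
                return size
    return 0

# kappa(C_n) = floor(n/2) for cycles
for n in range(3, 9):
    cycle = [(i, (i + 1) % n) for i in range(n)]
    assert independence_number(n, cycle) == n // 2
```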

The main results of the paper are collected in the following theorem, which generalises results concerning positive recurrence of the Markov chain obtained in [13]. Note that the theorem includes the results for the hard-core case \(\beta =-\infty \), specified by the following modification of the rates (1.2):
$$\begin{aligned} q_{\xi ,\eta }= {\left\{ \begin{array}{ll} e^{\alpha \xi _i}, &{} \eta =\xi +\mathbf{e}_i,\, \text {if }\, \xi _j=0, j\sim i,\\ 1, &{} \eta =\xi -\mathbf{e}_i,\, \text {if }\, \xi _i>0, \\ 0, &{} \text {otherwise}, \end{array}\right. } \end{aligned}$$
(2.1)
and discussed in more detail in Sect. 3.3.

Theorem 2.1

Let \(-\infty<\alpha <\infty \) and \(-\infty \le \beta <\infty \), and consider the CTMC \(\xi (t)\).
  (i) If \(\alpha <0\) and \(\alpha +\beta \lambda _1(G)<0\), then \(\xi (t)\) is positive recurrent.

  (ii) \(\xi (t)\) is null recurrent in the following cases:

    (a) \(\alpha =0\), \(\beta <0\) and \(\kappa (G)\le 2\);

    (b) \(\alpha =\beta =0\) and \(n\le 2\);

    (c) \(\alpha =0\), \(\beta >0\), \(e(G)=0\) and \(n\le 2\).

  (iii) In all other cases, \(\xi (t)\) is transient. This means the cases:

    (a) \(\alpha >0\);

    (b) \(\alpha =0\), \(\beta >0\) and \(e(G)>0\);

    (c) \(\alpha =0\), \(\beta >0\), \(e(G)=0\) and \(n\ge 3\);

    (d) \(\alpha =\beta =0\) and \(n\ge 3\);

    (e) \(\alpha =0\), \(\beta <0\) and \(\kappa (G)\ge 3\);

    (f) \(\alpha <0\) and \(\alpha +\beta \lambda _1(G)\ge 0\).
Theorem 2.1 is summarized in the diagram in Fig. 1.
Fig. 1

The different phases for the CTMC \(\xi (t)\). (Ignoring a trivial exception if \(e(G)=0\), \(\alpha =0\) and \(\beta >0\), see Remark 1.3.)

Remark 2.1

Theorem 2.1 shows that the behaviour of the Markov chain has the following monotonicity property: if the Markov chain is transient for some given parameters \((\alpha _0, \beta _0)\), then it is also transient for all parameters \((\alpha , \beta )\) such that \(\alpha \ge \alpha _0\) and \(\beta \ge \beta _0\). This can also easily be seen directly using electric networks as in Sect. 3.2, see the proof of Lemma 4.8.

Remark 2.2

There is a vast literature devoted to graph eigenvalues. In particular, there are well-known bounds for the largest eigenvalue \(\lambda _1\). We give two simple examples where the largest eigenvalue \(\lambda _1\) can easily be computed explicitly, which allows us to rewrite the conditions of Theorem 2.1 in the case \(\alpha <0\) in a more explicit form. These examples basically rephrase results previously obtained in [13, Theorems 4 and 6].

Example 2.1

Assume that G is a regular graph, i.e., a graph in which every vertex has the same degree \(\nu \), say. Then \(\lambda _1=\nu \). Hence, the Markov chain is positive recurrent if and only if \(\alpha <0\) and \(\alpha +\beta \nu <0\). If \(\alpha <0\) and \(\alpha +\beta \nu \ge 0\), then the Markov chain is transient.

Example 2.2

Assume that the graph G is a star \({\mathsf {K}}_{1,m}\) with \(m=n-1\) non-central vertices, where \(m\ge 1\). A direct computation gives that \(\lambda _1=\sqrt{m}\). Hence, the Markov chain is positive recurrent if and only if \(\alpha <0\) and \(\alpha +\beta \sqrt{m}<0\). If \(\alpha <0\) and \(\alpha +\beta \sqrt{m}\ge 0\), then the Markov chain is transient.
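Both formulas are easy to confirm numerically. The check below is our own sketch (numpy assumed); it uses the complete graph \({\mathsf {K}}_4\) as an example of a 3-regular graph, and the star \({\mathsf {K}}_{1,5}\).

```python
import numpy as np

def lambda1(A):
    return np.linalg.eigvalsh(A)[-1]  # largest eigenvalue of a symmetric matrix

# K_4 is 3-regular, so lambda_1 = nu = 3
K4 = np.ones((4, 4)) - np.eye(4)
assert np.isclose(lambda1(K4), 3.0)

# Star K_{1,m}: centre 0 joined to m leaves, so lambda_1 = sqrt(m)
m = 5
star = np.zeros((m + 1, m + 1))
star[0, 1:] = star[1:, 0] = 1
assert np.isclose(lambda1(star), np.sqrt(m))
```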

We consider also two examples with \(\alpha =0\) and \(\beta <0\), when the independence number \(\kappa (G)\) is crucial.

Example 2.3

Let, as in Example 2.2, G be a star \({\mathsf {K}}_{1,m}\), where \(m\ge 1\). Then \(\kappa (G)=m=n-1\). Assume that \(\alpha =0\) and \(\beta <0\). Then, the Markov chain is null recurrent if \(n\le 3\), and transient if \(n\ge 4\).

Example 2.4

Let G be a cycle \({\mathsf {C}}_{n}\), where \(n\ge 3\). Then \(\kappa (G)=\lfloor n/2\rfloor \). Assume that \(\alpha =0\) and \(\beta <0\). Then, the Markov chain is null recurrent if \(n\le 5\), and transient if \(n\ge 6\).

3 Preliminaries

3.1 Reversibility of the Markov Chain

Define the following function
$$\begin{aligned} \begin{aligned} W(\xi )&:= \frac{\alpha }{2}\sum \limits _{i=1}^n\xi _i(\xi _i-1)+\beta \sum _{i,j:\;i\sim j}\xi _i\xi _j\\&= \frac{1}{2}\langle (\alpha E+\beta A)\xi , \xi \rangle -\frac{\alpha }{2}S(\xi ), \qquad \xi =(\xi _1,...,\xi _n)\in {\mathbb {Z}}_{+}^n, \end{aligned} \end{aligned}$$
(3.1)
where the second sum is interpreted as the sum over unordered pairs \(\{i,j\}\) (i.e., a sum over the edges in G), \(\langle \cdot , \cdot \rangle \) is the Euclidean scalar product, E is the \(n\times n\) identity matrix, A is the adjacency matrix of the graph G and
$$\begin{aligned} S(\xi ):=\sum \limits _{i=1}^n\xi _i. \end{aligned}$$
(3.2)
A direct computation gives the detailed balance equation
$$\begin{aligned} e^{\alpha \xi _i+\beta (A\xi )_i}e^{W(\xi )}=e^{W\left( \xi +\mathbf{e}_i\right) }, \end{aligned}$$
(3.3)
for \(i=1,\dots , n\) and \(\xi \in {\mathbb {Z}}_{+}^{n}\). Note that, recalling (1.2), (3.3) is equivalent to the standard form of the balance equation
$$\begin{aligned} q_{\xi ,\eta } e^{W(\xi )} = q_{\eta ,\xi } e^{W(\eta )}, \qquad \xi ,\eta \in {\mathbb {Z}}_+^n. \end{aligned}$$
(3.4)
Hence, (3.3) means that the Markov chain is reversible with invariant measure \(\mu (\xi ):=e^{W(\xi )}\), \(\xi \in {\mathbb {Z}}_{+}^n\).

The explicit formula for the invariant measure \(\mu \) enables us to easily see when \(\mu \) is summable, and thus can be normalised to an invariant distribution (i.e., a probability measure); we return to this in Lemma 4.12.
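A quick numerical sanity check of (3.3) is straightforward; the sketch below is our own illustration (the path graph on three vertices and the parameter values are arbitrary choices, and numpy is an assumed dependency).

```python
import numpy as np

def W(xi, A, alpha, beta):
    """Potential (3.1): (alpha/2) * sum_i xi_i(xi_i - 1) + beta * sum_{i~j} xi_i xi_j.
    The quadratic form xi A xi counts each edge twice, hence the factor 1/2."""
    return 0.5 * alpha * np.sum(xi * (xi - 1)) + 0.5 * beta * (xi @ A @ xi)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph on three vertices
alpha, beta = -0.7, 0.4
rng = np.random.default_rng(1)
for _ in range(100):
    xi = rng.integers(0, 6, size=3)
    for i in range(3):
        up_rate = np.exp(alpha * xi[i] + beta * (A @ xi)[i])
        e_i = np.zeros(3, dtype=int)
        e_i[i] = 1
        # (3.3): up-rate(xi) * e^{W(xi)} = e^{W(xi + e_i)}
        assert np.isclose(up_rate * np.exp(W(xi, A, alpha, beta)),
                          np.exp(W(xi + e_i, A, alpha, beta)))
```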

Remark 3.1

Recall that a recurrent CTMC has an invariant measure that is unique up to a multiplicative constant, while a transient CTMC in general may have several linearly independent invariant measures (or none). We do not investigate whether the invariant measure \(\mu \) is unique (up to constant factors) for our Markov chain in the transient cases.

3.2 Electric Network Corresponding to the Markov Chain

Let us define the electric network on graph \({\mathbb {Z}}_{+}^n\) corresponding to the Markov chain of interest. According to the general method (e.g., see [2] or [6]) the construction goes as follows. First, suppose that \(\beta >-\infty \). Given \(\xi =(\xi _1,\dots , \xi _n)\in {\mathbb {Z}}_{+}^n\) replace each edge
$$\begin{aligned} \{\xi -\mathbf{e}_i, \xi \}=\left\{ (\xi _1,\xi _2,\dots , \xi _{i}-1,\dots , \xi _n), \, (\xi _1, \xi _2,\dots , \xi _i,\dots , \xi _n)\right\} , \quad i=1,\dots ,n, \end{aligned}$$
(assuming \(\xi _i\ge 1\)) by a resistor with conductance (resistance\({}^{-1}\)) equal to
$$\begin{aligned} C_{\xi -\mathbf{e}_i, \xi }:=e^{W(\xi )}. \end{aligned}$$
(3.5)
Note that \(C_{\xi -\mathbf{e}_i, \xi }\) does not depend on i in our case. Also, \(C_{{{\mathbf {0}}},{\mathbf{e}}_i}=e^{W(\mathbf{e}_i)}=1\), i.e., the edges connecting the origin \({{\mathbf {0}}}\) with \(\mathbf{e}_i\) have conductance 1, and thus resistance 1 (Ohm, say).

We denote the network consisting of \({\mathbb {Z}}_+^n\) with the conductances (3.5) by \(\varGamma _{\alpha , \beta , G}\). For convenience, we will otherwise sometimes denote an electric network by the same symbol as the underlying graph when it is clear from the context what the conductances are.

Let \(N(\varGamma )\) be an electric network on an infinite graph \(\varGamma \). The effective resistance \(R_\infty (\varGamma )=R_\infty (N(\varGamma ))\) of the network is defined, loosely speaking, as the resistance between some fixed point of \(\varGamma \), which in our case we choose as \({{\mathbf {0}}}\), and infinity (see e.g. [2, 6] or [8] for more details). Recall that a reversible Markov chain is transient if and only if the effective resistance of the corresponding electric network is finite. Equivalently, a reversible Markov chain is recurrent if and only if the effective resistance of the corresponding electric network is infinite.

A common approach to showing either recurrence or transience of a reversible Markov chain is based on Rayleigh’s monotonicity law. In particular, if \(N(\varGamma ')\) is a subnetwork of \(N(\varGamma )\), obtained by deleting some edges, then \(R_\infty (\varGamma )\le R_\infty (\varGamma ')\). Therefore, if \(R_\infty (\varGamma ')<\infty \), then \(R_\infty (\varGamma )<\infty \) as well, and thus the corresponding Markov chain on \(\varGamma \) is transient. Similarly, if the network \(N(\varGamma '')\) is obtained from \(N(\varGamma )\) by short-circuiting one or several sets of vertices, then \(R_\infty (\varGamma '')\le R_\infty (\varGamma )\). Hence, if \(R_\infty (\varGamma '')=\infty \), then \(R_\infty (\varGamma )=\infty \) as well, and the corresponding Markov chain on \(\varGamma \) is recurrent.

Example 3.1

We illustrate these methods, and give a flavour of later proofs, by showing how they work for a simple random walk (SRW) on \({\mathbb {Z}}_+^n\), which as said in Remark 1.1 is the special case \(\alpha =\beta =0\) of our model. The corresponding electric network has all resistances equal to 1.

First, we obtain a lower bound of \(R_\infty ({\mathbb {Z}}_+^n)\) by some short-circuiting. (See [2, p. 76], or the Nash-Williams criterion and Remark 2.10 in [8, pp. 37–38].) Let, recalling (3.2),
$$\begin{aligned} V_L:=\{\mathbf{x}\in {\mathbb {Z}}_+^n: S(\mathbf{x})=L\}, \qquad L=0,1,\dots , \end{aligned}$$
(3.6)
and let \(\varGamma ''\) be the network obtained from \({\mathbb {Z}}_+^n\) by short-circuiting each set \(V_L\) of vertices; we can regard each \(V_L\) as a vertex in \(\varGamma ''\). Then we have \(\asymp L^{n-1}\) resistors in parallel connecting \(V_{L-1}\) and \(V_L\). As a result, their conductances (i.e., inverse resistances) add up; hence the effective resistance \(R_L\) between \(V_{L-1}\) and \(V_L\) is \(\asymp \frac{1}{L^{n-1}}\). Now \(\varGamma ''\) consists of a sequence of resistors \(R_L\) in series, so we must sum them; consequently the resistance of the modified network is
$$\begin{aligned} R_\infty (\varGamma '')=\sum _{L=1}^\infty R_L\asymp \sum _{L=1}^{\infty } \frac{1}{L^{n-1}}. \end{aligned}$$
(3.7)
If \(n=1\) or \(n=2\), this sum is infinite and thus \(R_\infty ({\mathbb {Z}}_+^n)\ge R_\infty (\varGamma '')=\infty \); hence the SRW is recurrent.

On the other hand, if \(n\ge 3\), one can show that the random walk is transient. See, for example, the description of the tree \(NT_{2.5849}\) in [2, Sect. 2.2.9], or the construction of a flow with finite energy in [8, page 41] (done there for \({\mathbb {Z}}^n\), but the argument works for \({\mathbb {Z}}_+^n\) too), for a direct proof that \(R_\infty ({\mathbb {Z}}_+^n)<\infty \). An alternative argument uses the well-known transience of SRW on \({\mathbb {Z}}^n\) (\(n\ge 3\)) as follows. Consider a unit current flow from \({{\mathbf {0}}}\) to infinity on \({\mathbb {Z}}^n\). By symmetry, for every vertex \((x_1,\dots ,x_n)\in {\mathbb {Z}}^n\), the potential is the same at all points \((\pm x_1,\dots ,\pm x_n)\). Hence we may short-circuit each such set without changing the effective resistance \(R_\infty \). The short-circuited network, \(\varGamma '\) say, is thus also transient. However, \(\varGamma '\) can be regarded as a network on \({\mathbb {Z}}_+^n\) where each edge has a conductance between 2 and \(2^n\) (depending only on the number of non-zero coordinates). Hence, by Rayleigh’s monotonicity law, \(R_\infty ({\mathbb {Z}}_+^n)\le 2^nR_\infty (\varGamma ')<\infty \), and thus the SRW is transient.

3.3 The Hard-Core Interaction

Let us discuss in more detail the model with hard-core interaction, i.e. \(\beta =-\infty \). Then a component \(\xi _i\) can increase only when \(\xi _j=0\) for every \(j\sim i\), and it follows that the set
$$\begin{aligned} \varGamma _0:=\{\xi \in {\mathbb {Z}}_{+}^{n}: \xi _i \xi _j=0 \text { when } i\sim j\} \end{aligned}$$
(3.8)
is absorbing, i.e., if the Markov chain \(\xi (t)\) reaches \(\varGamma _0\), then it will stay there forever. In particular, if the chain starts at \({{\mathbf {0}}}\), then it will stay in \(\varGamma _0\). Moreover, it is easy to see that given any initial state, the process will a.s. reach \(\varGamma _0\) at some time (and then thus stay in \(\varGamma _0\)). Hence, any state \(\xi \in {\mathbb {Z}}_{+}^n\setminus \varGamma _0\) (i.e., with at least two neighbouring non-zero components) is a non-essential state, and the long-term behaviour of \(\xi (t)\) depends only on its behaviour on \(\varGamma _0\).

Therefore, in the hard-core case we consider the Markov chain with the state space \(\varGamma _0\). This chain on \(\varGamma _0\) is easily seen to be irreducible.

Note that \(\varGamma _0\) is the set of configurations such that \(\langle A\xi ,\xi \rangle =0\), where A is the adjacency matrix of graph G. Equivalently, a configuration \(\xi \) belongs to \(\varGamma _0\) if and only if the set \(\{i:\xi _i>0\}\) is an independent set of vertices in G (see Definition 2.2).
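This characterisation is easy to verify exhaustively on a small box of configurations; the brute-force sketch below is our own illustration, for the path graph on three vertices (numpy assumed).

```python
import numpy as np
from itertools import product, combinations

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # path graph on three vertices
edges = {(0, 1), (1, 2)}

# On the box {0,...,3}^3: <A xi, xi> = 0 iff the support of xi is independent
for xi in product(range(4), repeat=3):
    x = np.array(xi)
    in_gamma0 = (x @ A @ x) == 0
    support = [i for i in range(3) if x[i] > 0]
    support_independent = all(tuple(sorted(p)) not in edges
                              for p in combinations(support, 2))
    assert in_gamma0 == support_independent
```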

Remark 3.2

In the special case \(\alpha =0\), the Markov chain with the hard-core interaction \(\beta =-\infty \) can be regarded as a simple symmetric random walk on the subgraph \(\varGamma _0\subseteq {\mathbb {Z}}_+^n\). In this special case, (3.1) yields \(W(\xi )=0\) for every \(\xi \in \varGamma _0\), so by (3.5), the conductance of every edge in \(\varGamma _0\) is 1. We may also regard this network as a network on \({\mathbb {Z}}_+^n\) with the conductance for edge \(\{\xi -\mathbf{e}_k, \xi \}\) defined by
$$\begin{aligned} C_{\xi -\mathbf{e}_k, \xi }= {\left\{ \begin{array}{ll} 1,&{} \text { if } \sum \nolimits _{i,j:\;i\sim j}\xi _i\xi _j=0,\\ 0, &{} \text { if } \sum \nolimits _{i,j:\;i\sim j}\xi _i\xi _j\ne 0, \end{array}\right. } \end{aligned}$$
(3.9)
where the second case simply means that the edge is not wired.

4 Proof of Theorem 2.1

In this section we prove Theorem 2.1 by proving a long series of lemmas treating different cases. Note that we include the hard-core case \(\beta =-\infty \). (For emphasis we say this explicitly each time it may occur.) Recall that \(\varGamma _{\alpha , \beta , G}\) denotes \({\mathbb {Z}}_+^n\) regarded as an electrical network with conductances (3.5) corresponding to the CTMC \(\xi (t)\).

As a first application of the method of electric networks we treat the case \(\alpha >0\).

Lemma 4.1

If \(\alpha >0\) and \(-\infty \le \beta <\infty \), then the CTMC \(\xi (t)\) is transient.

Proof

Consider the subnetwork of \(\varGamma _{\alpha , \beta , G}\) consisting of the axis \(\varGamma '={\mathbb {Z}}_+\mathbf{e}_1=\{\mathbf{x}\in {\mathbb {Z}}_{+}^n: x_i=0,\, i\ne 1\}\). For \(k\ge 1\), the conductance on the edge connecting \((k-1)\mathbf{e}_1=(k-1, 0,\dots ,0)\) and \(k\mathbf{e}_1=(k, 0,\dots , 0)\) is by (3.5) and (3.1) equal to
$$\begin{aligned} e^{W(k\mathbf{e}_1)}=e^{\frac{\alpha }{2}k(k-1)}; \end{aligned}$$
(4.1)
hence the resistance is \(e^{-\frac{\alpha }{2}k(k-1)}\). Since the resistors in \(\varGamma '\) are connected in series, the effective resistance of this subnetwork is
$$\begin{aligned} R_\infty (\varGamma ')=\sum \limits _{k=1}^{\infty }e^{-\frac{\alpha }{2}{k(k-1)}}<\infty , \end{aligned}$$
(4.2)
as \(\alpha >0\). Therefore, the effective resistance of the original network \(\varGamma _{\alpha , \beta , G}\) is also finite. Consequently, the Markov chain is transient. \(\square \)
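The series (4.2) converges very rapidly; the snippet below is our own numerical aside, with \(\alpha =1/2\) chosen arbitrarily.

```python
import math

alpha = 0.5
# Axis resistance (4.2): R = sum_{k >= 1} exp(-alpha * k * (k-1) / 2)
R = sum(math.exp(-alpha * k * (k - 1) / 2) for k in range(1, 200))
# The k = 1 term is 1 and the tail decays super-exponentially,
# so R is finite (roughly 1.89 for alpha = 0.5)
```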

We give similar arguments for the other transient cases. Recall that A is a non-negative symmetric matrix with eigenvalues \(\lambda _1,\dots ,\lambda _n\). Thus there exists an orthonormal basis of eigenvectors \(\mathbf{v}_i\) with \(A\mathbf{v}_i=\lambda _i\mathbf{v}_i\), \(i=1,\ldots , n\). By the Perron–Frobenius theorem \(\mathbf{v}_1\) can be chosen non-negative, i.e. \(\mathbf{v}_1\in {{\mathbb {R}}}_{+}^n\). (If G is connected, then \(\mathbf{v}_1\) is unique and strictly positive.)

Lemma 4.2

If \(\alpha <0\) and \(\alpha +\beta \lambda _1\ge 0\), then the CTMC \(\xi (t)\) is transient.

Proof

For each \(t\ge 0\), define \(x(t):=t\mathbf{v}_1\) and \(y(t):=(\lfloor x_1(t) \rfloor , \ldots , \lfloor x_n(t) \rfloor )\). By construction, y(t) is piecewise constant. Let \(y_0=0, y_1, y_2,\ldots \) be the sequence of different values of y(t), where at each t such that two or more coordinates of y(t) jump simultaneously, we insert intermediate vectors, so that only one coordinate changes at a time, and \(||y_{k+1}-y_k||=1\) for all k. Then \(S(y_k)\), the sum of coordinates of \(y_k\), is equal to k, and thus \(k/n\le ||y_k||\le k\). Furthermore, for each k there is a \(t_k\) such that \(||y_k-y(t_k)||\le n\), and thus
$$\begin{aligned} ||y_k-t_k\mathbf{v}_1||=||y_k-x(t_k)||=O(1). \end{aligned}$$
(4.3)
Express \(y_k\) in the basis \(\mathbf{v}_1,\ldots ,\mathbf{v}_n\) as \(y_k=\sum _{i=1}^na_{k,i}{} \mathbf{v}_i\); then (4.3) implies \(a_{k,i}=O(1)\) for \(i\ne 1\). Thus
$$\begin{aligned} \langle (\alpha E+\beta A)y_k, y_k\rangle =\sum \limits _{i=1}^n(\alpha +\beta \lambda _i)a^2_{k,i}=(\alpha +\beta \lambda _1)a_{k, 1}^2+O(1)\ge O(1), \end{aligned}$$
(4.4)
since \(\alpha +\beta \lambda _1\ge 0\) by assumption. Therefore, by (3.1),
$$\begin{aligned} W(y_k)\ge -\frac{\alpha }{2}S(y_k)+O(1)=\frac{|\alpha |}{2}k+O(1). \end{aligned}$$
(4.5)
Consider the subnetwork \(\varGamma '\subset \varGamma _{\alpha , \beta , G}\) formed by the vertices \(\{y_k\}\). The resistance of the edge connecting \(y_{k-1}\) and \(y_k\) is equal to
$$\begin{aligned} R_k=e^{-W(y_k)}\le Ce^{- \frac{|\alpha |}{2}k}, \end{aligned}$$
(4.6)
so that the effective resistance of the subnetwork \(R_\infty (\varGamma ')=\sum _{k}R_k<\infty \). Hence \(R_\infty (\varGamma _{\alpha , \beta , G})<\infty \) and the Markov chain is transient. \(\square \)
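The behaviour of \(W\) along the Perron direction can also be observed numerically. The sketch below is our own simplified illustration: it evaluates \(W\) at the lattice points \(\lfloor t\mathbf{v}_1 \rfloor \) rather than building the full path \(\{y_k\}\), for the star \({\mathsf {K}}_{1,4}\) in the critical case \(\alpha +\beta \lambda _1=0\) (numpy assumed).

```python
import numpy as np

m = 4
A = np.zeros((m + 1, m + 1))
A[0, 1:] = A[1:, 0] = 1                   # star K_{1,4}, lambda_1 = sqrt(4) = 2
lam1 = np.linalg.eigvalsh(A)[-1]
v1 = np.abs(np.linalg.eigh(A)[1][:, -1])  # Perron eigenvector, non-negative

alpha = -1.0
beta = abs(alpha) / lam1                  # critical case: alpha + beta*lam1 = 0

def W(xi):
    return 0.5 * alpha * np.sum(xi * (xi - 1)) + 0.5 * beta * (xi @ A @ xi)

# W at lattice points near the ray t*v1 grows linearly, consistent with (4.5):
# the quadratic part of W stays bounded, while (|alpha|/2)*S grows like t
vals = [W(np.floor(t * v1)) for t in np.linspace(1, 200, 50)]
assert vals[-1] > vals[0]
```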

Lemma 4.3

If \(\alpha =0\), \(\beta >0\) and \(e(G)>0\), then the CTMC \(\xi (t)\) is transient.

Proof

We do exactly as in the proof of Lemma 4.2 up to (4.4). Now \(\alpha =0\), so (4.5) is no longer good enough. Instead we note that (4.3) implies \(a_{k,1}=t_k+O(1)\) and also \(k=S(y_k)=S(x(t_k))+O(1)=Ct_k+O(1)\), where \(C=S(\mathbf{v}_1)>0\). Thus, with \(c=C^{-1}\),
$$\begin{aligned} a_{k,1}= t_k+O(1) =ck+O(1). \end{aligned}$$
(4.7)
Furthermore, \(\lambda _1>0\) since \(e(G)>0\), and thus (3.1), (4.4) and (4.7) yield, recalling \(\alpha =0\),
$$\begin{aligned} W(y_k) =\frac{1}{2} \beta \lambda _1(ck+O(1))^2+O(1) \ge c_1k^2 \end{aligned}$$
(4.8)
for some \(c_1>0\) and all large k.

It follows again that the subnetwork \(\varGamma ':=\{y_k\}\) has finite effective resistance, and thus the Markov chain is transient. \(\square \)

Alternatively, several other choices of paths \(\{y_k\}\) could have been used in the proof of Lemma 4.2, for example \(\{(k,1,0,\dots ,0):k\ge 0\}\).

Lemma 4.4

If \(\alpha =0\), \(\beta =0\) and \(n\ge 3\), then the CTMC \(\xi (t)\) is transient.

Proof

As said in Remark 1.1 and Example 3.1, in this case, the Markov chain is just simple random walk on \({\mathbb {Z}}_+^n\), which is transient for \(n\ge 3\). \(\square \)

Lemma 4.5

If \(\alpha =0\), \(\beta >0\), \(e(G)=0\) and \(n\ge 3\), then the CTMC \(\xi (t)\) is transient.

Proof

When \(e(G)=0\), the parameter \(\beta \) is irrelevant and may be changed to 0. The result thus follows from Lemma 4.4. \(\square \)

Lemma 4.6

If \(\alpha =0\), \(-\infty \le \beta <\infty \) and \(\kappa \ge 3\), then the CTMC \(\xi (t)\) is transient.

Proof

Since \(\kappa \ge 3\), the graph G contains three pairwise non-adjacent vertices; w.l.o.g. let them be 1, 2 and 3. Consider the subnetwork
$$\begin{aligned} \varGamma ':={\mathbb {Z}}_+^3\times \{0\}^{n-3}=\{(\xi _1,\xi _2,\xi _3,0,\dots ,0)\} \subset \varGamma _0\subset \varGamma _{\alpha , \beta , G}={\mathbb {Z}}_+^n. \end{aligned}$$
(4.9)
By (3.1), we have in this case \(W(\xi )=0\) for every \(\xi \in \varGamma '\), and thus (3.5) implies that in the corresponding electrical network all edges in \(\varGamma '\) have conductance 1, and thus resistance 1. Hence, the Markov chain corresponding to the network \(\varGamma '\) is simple random walk on \(\varGamma '\cong {\mathbb {Z}}_+^3\). By Remark 1.1 and Example 3.1, a simple random walk on the octant \({\mathbb {Z}}_+^3\) is transient, and thus \(R_\infty (\varGamma ')=R_\infty ({\mathbb {Z}}_+^3)<\infty \). Consequently, \(R_\infty (\varGamma _{\alpha , \beta , G})\le R_\infty (\varGamma ')<\infty \), and thus the Markov chain is transient. \(\square \)

We turn to proving recurrence in the remaining cases.

Lemma 4.7

If \(\alpha <0\), \(\alpha +\beta \lambda _1<0\) and \(\beta \ge 0\), then the CTMC \(\xi (t)\) is recurrent.

Proof

Let \(\delta =-(\alpha +\beta \lambda _1)>0\). The eigenvalues of the symmetric matrix \(\alpha E+\beta A\) are \(\alpha +\beta \lambda _i\le \alpha +\beta \lambda _1=-\delta \), \(i=1,\dots ,n\). Thus, by (3.1),
$$\begin{aligned} W(\xi ) = \frac{1}{2}\langle (\alpha E+\beta A)\xi , \xi \rangle -\frac{\alpha }{2}S(\xi ) \le -\frac{\delta }{2}\langle \xi ,\xi \rangle +\frac{|\alpha |}{2}S(\xi ) . \end{aligned}$$
(4.10)
We now argue as in Example 3.1. Let again \(V_L\) be defined by (3.6), and let \(\varGamma ''\) be the network obtained from \(\varGamma _{\alpha , \beta , G}\) by short-circuiting each set \(V_L\) of vertices. For \(\xi \in V_L\), we have by the Cauchy–Schwarz inequality \(L^2=S(\xi )^2\le n\langle \xi ,\xi \rangle \), and thus by (4.10) and (3.5), the conductance
$$\begin{aligned} C_{\xi -\mathbf{e}_i, \xi } = e^{W(\xi )} \le e^{-\frac{\delta }{2n}L^2+\frac{|\alpha |}{2}L} \le Ce^{-cL^2} \end{aligned}$$
(4.11)
for some positive constants \(c\) and \(C\).
For \(L\ge 1\), there are \(O(L^{n-1})\) vertices in \(V_L\), and thus \(O(L^{n-1})\) edges between \(V_{L-1}\) and \(V_L\). When short-circuiting each \(V_L\), we can regard each \(V_L\) as a single vertex in \(\varGamma ''\); the edges between \(V_{L-1}\) and \(V_L\) then become parallel, and can be combined into a single edge between \(V_{L-1}\) and \(V_L\). The conductance, \(C_L\) say, of this edge is obtained by summing the conductances of all edges between \(V_{L-1}\) and \(V_L\) (since they are in parallel), and thus
$$\begin{aligned} C_L = O\bigl (L^{n-1}\bigr )\cdot O\bigl (e^{-cL^2}\bigr ) = O(1). \end{aligned}$$
(4.12)
Consequently, the resistances \(C_L^{-1}\) are bounded below, and since \(\varGamma ''\) is just a path with these resistances in series,
$$\begin{aligned} R_\infty (\varGamma '')=\sum _{L=1}^\infty C_L^{-1}=\infty . \end{aligned}$$
(4.13)
As explained in Sect. 3.2, this implies that \(R_\infty (\varGamma _{\alpha , \beta , G})=\infty \) and that the Markov chain \(\xi (t)\) is recurrent. \(\square \)
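The estimate (4.12) can also be illustrated numerically. The following sketch (our own illustration, not part of the proof) takes the hypothetical example \(n=2\), \(G={\mathsf {K}}_2\), \(\alpha =-1\) and \(\beta =1/2\), so that \(\alpha +\beta \lambda _1=-1/2<0\), and computes the total conductance \(C_L\) between the shorted levels \(V_{L-1}\) and \(V_L\):

```python
import math

# Our own illustration of (4.12); not part of the proof.
# Hypothetical parameters: n = 2, G = K_2, alpha = -1, beta = 0.5,
# so alpha + beta*lambda_1 = -0.5 < 0.
ALPHA, BETA = -1.0, 0.5

def W(x1, x2):
    # (3.1) for K_2: W = (1/2)<(alpha E + beta A)x, x> - (alpha/2) S(x).
    return (0.5 * (ALPHA * (x1 ** 2 + x2 ** 2) + 2 * BETA * x1 * x2)
            - 0.5 * ALPHA * (x1 + x2))

def C_L(L):
    # Total conductance between the shorted levels V_{L-1} and V_L:
    # each xi in V_L with xi_i > 0 contributes an edge of conductance
    # e^{W(xi)}, by (3.5).
    return sum(((x1 > 0) + (L - x1 > 0)) * math.exp(W(x1, L - x1))
               for x1 in range(L + 1))

# C_L decays rapidly, so the resistances C_L^{-1} in series sum to
# infinity, in line with (4.13).
print(C_L(1), C_L(5), C_L(20))
```

The printed values decay towards 0, so the partial sums of \(C_L^{-1}\) grow without bound, matching the recurrence conclusion.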

Lemma 4.8

If \(\alpha <0\), \(\alpha +\beta \lambda _1<0\) and \(-\infty \le \beta \le 0\), then the CTMC \(\xi (t)\) is recurrent.

Proof

We use monotonicity. If we replace \(\beta \) by 0, then Lemma 4.7 applies; consequently, \(R_\infty (\varGamma _{\alpha , 0, G})=\infty \). On the other hand, if \(W_0(\xi )\) is defined by (3.1) with \(\beta \) replaced by 0, then \(W(\xi )\le W_0(\xi )\) (since \(\beta \le 0\)), and thus by (3.5), each edge in \(\varGamma _{\alpha , \beta , G}\) has at most the same conductivity as in \(\varGamma _{\alpha , 0, G}\). Equivalently, each resistance is at least as large in \(\varGamma _{\alpha , \beta , G}\) as in \(\varGamma _{\alpha , 0, G}\), and thus by Rayleigh’s monotonicity law, \(R_\infty (\varGamma _{\alpha , \beta , G})\ge R_\infty (\varGamma _{\alpha , 0, G})=\infty \). Hence, the Markov chain is recurrent. \(\square \)

Lemma 4.9

If \(\alpha =0\), \(\beta =0\) and \(n\le 2\), then the CTMC \(\xi (t)\) is recurrent.

Proof

See Remark 1.1 and Example 3.1. \(\square \)

Lemma 4.10

If \(\alpha =0\), \(-\infty \le \beta <0\) and \(\kappa \le 2\), then the CTMC \(\xi (t)\) is recurrent.

Proof

We assume that \(n\ge 3\); the case \(n\le 2\) follows by a simpler version of the same argument (taking \(u=0\) below), or by Lemma 4.9 and Rayleigh’s monotonicity law as in the proof of Lemma 4.8.

The assumption \(\kappa \le 2\) implies that amongst any three vertices of the graph there are at least two which are connected by an edge.

Let \(b:=-\beta >0\). Then, since \(\alpha =0\), (3.1) yields
$$\begin{aligned} W(\mathbf{x})=-\frac{b}{2}\langle A\mathbf{x},\mathbf{x}\rangle =-b\sum _{i,j:\;i\sim j}x_ix_j,\qquad \mathbf{x}=(x_1,...,x_n)\in {\mathbb {Z}}_{+}^n. \end{aligned}$$
(4.14)
Let again \(V_L\) be defined by (3.6), short-circuit all the vertices within each \(V_L\), and denote the resulting network by \(\varGamma ''\). We can regard each \(V_L\) as a vertex of \(\varGamma ''\).
Fix \(L\in {\mathbb {Z}}_+\) and consider \(\mathbf{x}=(x_1,\dots ,x_n)\in V_L\). Let us order the components of \(\mathbf{x}\) in decreasing order: \(x_{(1)}\ge x_{(2)} \ge x_{(3)}\ge \dots \ge x_{(n)}\ge 0\). Denote \(x_{(3)}=u\); then, by construction, \(u\in \{0,1,\dots ,\lfloor {L/3}\rfloor \}\). Among the three vertices corresponding to \(x_{(1)},x_{(2)},x_{(3)}\) at least two are connected, so that we can bound
$$\begin{aligned} W(\mathbf{x})=-b\sum _{i,j:\;i\sim j}x_i x_j\le -bu^2. \end{aligned}$$
(4.15)
Hence, by (3.5), the conductance of each of the resistors coming to \(\mathbf{x}\) from \(V_{L-1}\) is bounded above by \(e^{-b u^2}\). Next, the number of such \(\mathbf{x}\in V_L\) with \(x_{(3)}=u\) is bounded by \(n!\, (u+1)^{n-3}L\), as there are at most \(u+1\) possibilities for each of \(x_{(4)},x_{(5)},\dots ,x_{(n)}\), at most L possibilities for \(x_{(2)}\), and then \(x_{(1)}=L-\sum _{i=2}^{n} x_{(i)}\) is determined, and there are at most n! different orderings of \(x_i\) for each \(x_{(1)},\dots ,x_{(n)}\).
All these resistors are in parallel, so we sum their conductances to get the effective conductance between \(V_{L-1}\) and \(V_L\), which is thus bounded above by
$$\begin{aligned} n!\,L \sum _{u=0}^L (u+1)^{n-3} e^{-b u^2} \le C(n, b)L, \end{aligned}$$
(4.16)
for some \(C(n,b)<\infty \). (Thus, the conductance between \(V_{L-1}\) and \(V_L\) is of the same order as in the case \({\mathbb {Z}}_+^2\) in Example 3.1.) Hence, the effective resistance \(R_L\) between \(V_{L-1}\) and \(V_L\) is bounded below by \(cL^{-1}\), and thus
$$\begin{aligned} R_\infty (\varGamma '')=\sum _{L=1}^\infty R_L \ge c\sum _{L=1}^\infty \frac{1}{L}=\infty . \end{aligned}$$
(4.17)
Finally, \(R_\infty (\varGamma _{\alpha , \beta , G})\ge R_\infty (\varGamma '')=\infty \), and the chain is therefore recurrent. \(\square \)
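The key point in (4.16) is that the Gaussian factor \(e^{-bu^2}\) dominates the polynomial factor, so the sum over u is bounded uniformly in L. The following numerical sketch (our own illustration, with the hypothetical choices \(n=5\) and \(b=1/2\)) confirms this:

```python
import math

# Our own illustration of (4.16); not part of the proof.
# Hypothetical choices: n = 5, b = 0.5.
n, b = 5, 0.5

def partial_sum(L):
    # The sum appearing in (4.16) (without the n! L prefactor).
    return sum((u + 1) ** (n - 3) * math.exp(-b * u * u)
               for u in range(L + 1))

# The partial sums stabilise almost immediately: the Gaussian factor
# kills the polynomial, so the conductance between V_{L-1} and V_L
# is O(L), as for Z_+^2 in Example 3.1.
print(partial_sum(10), partial_sum(200))
```

The two printed values agree to machine precision, so the bound \(C(n,b)\) is independent of L.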

Lemma 4.11

If \(\alpha =0\), \(\beta >0\), \(e(G)=0\) and \(n\le 2\), then the CTMC \(\xi (t)\) is recurrent.

Proof

Since \(e(G)=0\), we may replace \(\beta \) by 0; the result then follows from Lemma 4.9. \(\square \)

This completes the classification of transient and recurrent cases. We proceed to distinguish between positive recurrent and null recurrent cases; we do this by analysing the invariant measure \(\mu (\xi )=e^{W(\xi )}\), and in particular its total mass
$$\begin{aligned} Z_{\alpha , \beta , G}:=\sum \limits _{\xi \in {\mathbb {Z}}_{+}^n} e^{W(\xi )}=\sum \limits _{\xi \in {\mathbb {Z}}_{+}^n} e^{\frac{1}{2}\langle (\alpha E+\beta A)\xi , \xi \rangle -\frac{\alpha }{2}S(\xi )}\le \infty . \end{aligned}$$
(4.18)
Note that if \(Z=Z_{\alpha , \beta , G}<\infty \), then the invariant measure \(\mu \) can be normalised to an invariant distribution \(Z^{-1}e^{W(\xi )}\). Furthermore, recall that an irreducible CTMC is positive recurrent if and only if it has an invariant distribution and is non-explosive.

Remark 4.1

In general, a CTMC may have an invariant distribution and be explosive (and thus transient), see e.g. [10, Sect. 3.5]; we will see that this does not happen in our case. In other words, our CTMC is positive recurrent exactly when \(Z_{{\alpha , \beta , G}}<\infty \). See also Sect. 5.

Lemma 4.12

Let \(-\infty<\alpha <\infty \) and \(-\infty \le \beta <\infty \). Then \(Z_{{\alpha , \beta , G}}<\infty \) if and only if \(\alpha <0\) and \(\alpha +\beta \lambda _1<0\).

Proof

We consider four different cases.

Case 1: \(\alpha \ge 0\). By (4.1), \(e^{W(k\mathbf{e}_1)}=e^{\frac{\alpha }{2}k(k-1)}\ge 1\), and thus \(Z_{{\alpha , \beta , G}}\ge \sum _{k=1}^\infty e^{W(k\mathbf{e}_1)}=\infty .\)

Case 2: \(\alpha <0\) and \(\alpha +\beta \lambda _1\ge 0\). Let \(y_k\) be as in Lemma 4.2. Then (4.5) applies and implies in particular \(W(y_k)\ge -C\) for some constant C. Hence,
$$\begin{aligned} Z_{{\alpha , \beta , G}}\ge \sum _{k=1}^\infty e^{W(y_k)} \ge \sum _{k=1}^\infty e^{-C}=\infty . \end{aligned}$$
(4.19)
Case 3: \(\alpha <0\), \(\alpha +\beta \lambda _1<0\) and \(\beta \ge 0\). The estimate (4.11) applies for every \(\xi \in V_L\), and since the number of vertices in \(V_L\) is \(O(L^{n-1})\) for \(L\ge 1\), we have
$$\begin{aligned} Z_{{\alpha , \beta , G}}= 1+\sum _{L=1}^\infty \sum _{\xi \in V_L}e^{W(\xi )} \le 1+\sum _{L=1}^\infty C_1 L^{n-1} e^{-cL^2}<\infty . \end{aligned}$$
(4.20)
Case 4: \(\alpha <0\), \(\alpha +\beta \lambda _1<0\) and \(-\infty \le \beta \le 0\). We use monotonicity as in the proof of Lemma 4.8. Let again \(W_0(\xi )\) be given by (3.1) with \(\beta \) replaced by 0. Then, since \(\beta \le 0\), \(W(\xi )\le W_0(\xi )\) and thus \(Z_{{\alpha , \beta , G}}\le Z_{{\alpha , 0, G}}\). Furthermore, \(Z_{{\alpha , 0, G}}<\infty \) by Case 3. Hence, \(Z_{{\alpha , \beta , G}}<\infty \). \(\square \)
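The dichotomy of Lemma 4.12 can be observed numerically on truncations of \(Z_{\alpha ,\beta ,G}\). The following sketch (our own illustration, with the hypothetical example \(n=2\), \(G={\mathsf {K}}_2\), \(\alpha =-1\), \(\beta =1/2\), which falls under Case 3) shows the partial sums stabilising:

```python
import math

# Our own illustration of Lemma 4.12, Case 3; not part of the proof.
# Hypothetical example: n = 2, G = K_2 (lambda_1 = 1), alpha = -1,
# beta = 0.5, so alpha < 0 and alpha + beta*lambda_1 = -0.5 < 0.
ALPHA, BETA = -1.0, 0.5

def W(x1, x2):
    # (3.1) for K_2.
    return (0.5 * (ALPHA * (x1 ** 2 + x2 ** 2) + 2 * BETA * x1 * x2)
            - 0.5 * ALPHA * (x1 + x2))

def Z_trunc(M):
    # Partial sum of Z_{alpha, beta, G} over the box [0, M]^2.
    return sum(math.exp(W(x1, x2))
               for x1 in range(M + 1) for x2 in range(M + 1))

# The partial sums stabilise, consistent with Z < infinity; for
# alpha >= 0 (Case 1) they would instead blow up.
print(Z_trunc(20), Z_trunc(40))
```

Enlarging the truncation box changes the sum by a negligible amount, consistent with \(Z_{\alpha ,\beta ,G}<\infty \) in this regime.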

Lemma 4.13

  1. (i)

    If \(\alpha <0\) and \(\alpha +\beta \lambda _1<0\), then the CTMC \(\xi (t)\) is positive recurrent.

     
  2. (ii)

    If \(\alpha =0\), \(-\infty \le \beta <0\) and \(\kappa \le 2\), then the CTMC \(\xi (t)\) is null recurrent.

     
  3. (iii)

    If \(\alpha =0\), \(\beta =0\) and \(n\le 2\), then the CTMC \(\xi (t)\) is null recurrent.

     
  4. (iv)

    If \(\alpha =0\), \(\beta >0\), \(e(G)=0\) and \(n\le 2\), then the CTMC \(\xi (t)\) is null recurrent.

     

Proof

In all four cases, the Markov chain is recurrent, by Lemmas 4.7, 4.8, 4.9, 4.10 and 4.11. Hence the chain is non-explosive, and the invariant measure is unique up to a constant factor; furthermore, the chain is positive recurrent if and only if this measure has finite total mass so that there exists an invariant distribution. In other words, in these recurrent cases, the chain is positive recurrent if and only if \(Z_{{\alpha , \beta , G}}<\infty \). By Lemma 4.12, this holds in case (i), but not in (ii)–(iv). \(\square \)

Proof of Theorem 2.1

The theorem follows by collecting Lemmas 4.1–4.6 and 4.13. \(\square \)

5 The Corresponding Discrete Time Markov Chain

In this section we consider the discrete time Markov chain (DTMC) \(\zeta (t)\in {\mathbb {Z}}_{+}^n\) that corresponds to the CTMC \(\xi (t)\), i.e. the corresponding embedded DTMC. Note that we use t to denote both the continuous and the discrete time, although the two chains are related by a random change of time.

Recall that the transition probabilities of the DTMC \(\zeta (t)\) are proportional to the corresponding transition rates of the CTMC \(\xi (t)\). Thus, if the rates of \(\xi (t)\) are \(q_{\xi ,\eta }\), given by (1.2), and \(C_{\xi ,\eta }=C_{\eta ,\xi }\) are the conductances given by (3.5) (with \(C_{\xi ,\eta }=0\) if \(||\xi -\eta ||\ne 1\)), and further \(q_\xi :=\sum _{\eta \sim \xi }q_{\xi ,\eta }\) and \(C_\xi :=\sum _{\eta \sim \xi }C_{\xi ,\eta }\), then the transition probabilities of \(\zeta (t)\) are
$$\begin{aligned} p_{\xi ,\eta }:=\frac{q_{\xi ,\eta }}{q_\xi }=\frac{C_{\xi ,\eta }}{C_\xi }. \end{aligned}$$
(5.1)
It is obvious that a CTMC is irreducible if and only if the corresponding DTMC is, and it is easy to see that the same holds for reversibility. Similarly, since a CTMC and the corresponding DTMC pass through the same states (with a random change of time parameter), if one is recurrent [or transient], then so is the other. However, in general, since the two chains pass through the states at different speeds, one of the chains may be positive recurrent and the other null recurrent. (Recall that many different CTMCs have the same embedded DTMC, and that some of them may be positive recurrent and others not.) In our case, there is no such complication.

Theorem 5.1

The conclusions in Theorem 2.1 hold also for the DTMC \(\zeta (t)\).

Before proving the theorem, we note that it follows from (5.1) that the DTMC \(\zeta (t)\) is reversible with an invariant measure
$$\begin{aligned} {{\widehat{\mu }}}(\xi ):=C_\xi . \end{aligned}$$
(5.2)
We denote the total mass of this invariant measure by
$$\begin{aligned} \begin{aligned} {{\widehat{Z}}}_{\alpha , \beta , G}&:=\sum _{\xi \in {\mathbb {Z}}_+^n} C_\xi =\sum _{\xi }\sum _{\eta :\,\eta \sim \xi }C_{\xi ,\eta }\\&=2\sum _\xi \sum _{i:\,\xi _i>0} C_{\xi ,\xi -\mathbf{e}_i} =2\sum _\xi |\{i:\xi _i>0\}| e^{W(\xi )}. \end{aligned} \end{aligned}$$
(5.3)
Consequently,
$$\begin{aligned} Z_{{\alpha , \beta , G}}-1 \le {{\widehat{Z}}}_{\alpha , \beta , G} \le 2nZ_{{\alpha , \beta , G}}. \end{aligned}$$
(5.4)

Lemma 5.1

Let \(-\infty<\alpha <\infty \) and \(-\infty \le \beta <\infty \). Then \({{\widehat{Z}}}_{{\alpha , \beta , G}}<\infty \) if and only if \(\alpha <0\) and \(\alpha +\beta \lambda _1<0\).

Proof

Immediate by (5.4) and Lemma 4.12. \(\square \)

Proof of Theorem 5.1

As said above, \(\zeta (t)\) is transient precisely when \(\xi (t)\) is. A DTMC is positive recurrent if and only if it has an invariant distribution, and then every invariant measure is a multiple of the stationary distribution. Hence, \(\zeta (t)\) is positive recurrent if and only if the invariant measure \({{\widehat{\mu }}}(\xi )\) has finite mass, i.e., if \({{\widehat{Z}}}_{{\alpha , \beta , G}}<\infty \). Lemma 5.1 shows that this holds precisely in case (i) of Theorem 2.1, i.e., when \(\xi (t)\) is positive recurrent. \(\square \)

Remark 5.1

We can use the DTMC \(\zeta (t)\) to give an alternative proof of Lemma 4.13(i) without Lemmas 4.7 and 4.8. Assume \(\alpha <0\) and \(\alpha +\beta \lambda _1<0\). Then, by Lemma 5.1, \({{\widehat{Z}}}_{{\alpha , \beta , G}}<\infty \). Hence, the DTMC \(\zeta (t)\) has a stationary distribution and is thus positive recurrent. (Recall that this implication holds in general for a DTMC, but not for a CTMC, see Remark 4.1.) Hence \(\xi (t)\) is recurrent, and thus non-explosive. Furthermore, Lemma 4.12 shows that also \(Z_{{\alpha , \beta , G}}<\infty \), and thus also \(\xi (t)\) has a stationary distribution. Since \(\xi (t)\) is non-explosive, this implies that \(\xi (t)\) is positive recurrent.

6 Explosions

It was shown in [13] that in most of the transient cases in Theorem 2.1, the CTMC \(\xi (t)\) is explosive. (Recall that a recurrent CTMC is non-explosive.) We complement this by exhibiting in Lemma 6.1 one non-trivial transient case where \(\xi (t)\) is non-explosive.

Recall also the standard fact that if, as above, \(q_\xi :=\sum _\eta q_{\xi ,\eta }\) is the total rate of leaving \(\xi \), and \(\zeta (t)\) is the DTMC in Sect. 5, then \(\xi (t)\) is explosive if and only if \(\sum _{t=1}^\infty q_{\zeta (t)}^{-1}<\infty \) with positive probability. In particular, \(\xi (t)\) is non-explosive when the rates \(q_{\xi }\) are bounded.

Combining these results, we obtain the following partial classification, proved later in this section. Let \(\nu _i\) denote the degree of vertex \(i\in G\), and note that
$$\begin{aligned} \min _i\nu _i \le \lambda _1\le \max _i\nu _i. \end{aligned}$$
(6.1)
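For example, for the star \(K_{1,3}\) (one centre joined to three leaves) we have \(\lambda _1=\sqrt{3}\), and (6.1) reads
$$\begin{aligned} \min _i\nu _i=1\le \sqrt{3}\le 3=\max _i\nu _i, \end{aligned}$$
with both inequalities strict; for a regular graph, for instance a cycle, both inequalities in (6.1) are equalities.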

Theorem 6.1

Let \(-\infty<\alpha <\infty \) and \(-\infty \le \beta <\infty \), and consider the CTMC \(\xi (t)\).
  1. (i)
    \(\xi (t)\) is non-explosive in the following cases:
    1. (a)

      \(\alpha <0\) and \(\alpha +\beta \lambda _1(G)\le 0\),

       
    2. (b)

      \(\alpha =0\) and \(\beta \le 0\),

       
    3. (c)

      \(\alpha =0\), \(\beta >0\) and \(e(G)=0\).

       
     
  2. (ii)
    \(\xi (t)\) explodes a.s. in the following cases:
    1. (a)

      \(\alpha >0\),

       
    2. (b)

      \(\alpha =0\), \(\beta >0\) and \(e(G)>0\),

       
    3. (c)

      \(\alpha <0\) and \(\alpha +\beta \min _i \nu _i>0\).

       
     

Remark 6.1

Theorem 6.1 gives a complete characterization of explosions when the graph G is regular, i.e., \(\nu _i\) is constant, since then \(\min _i \nu _i=\lambda _1\), see (6.1).

For other graphs G, Theorem 6.1 leaves one case open, viz.
$$\begin{aligned} \alpha<0 \quad \text {and}\quad \alpha +\beta \min _i \nu _i\le 0 <\alpha +\beta \lambda _1(G) \end{aligned}$$
(6.2)
(and, as a consequence, \(\beta >0\)). We conjecture that \(\xi (t)\) always is explosive in this case, but leave this as an open problem. (Our intuition is that in this case, which is transient by Theorem 2.1, \(\xi (t)\) will tend to infinity along a path that stays rather close to the line \(\{s \mathbf{v}_1:s\in {{\mathbb {R}}}\}\) in \({{\mathbb {R}}}^n\), and that the rates \(q_\xi \) are exponentially large close to this line.)

Lemma 6.1

If \(\alpha <0\) and \(\alpha +\beta \lambda _1(G)=0\), then the CTMC \(\xi (t)\) is transient and non-explosive.

We prove first an elementary lemma.

Lemma 6.2

Define the functions \(\phi ,\psi :{{\mathbb {R}}}\rightarrow {{\mathbb {R}}}\) and \(\varPhi ,\varPsi :{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}\) by, with \(\mathbf{u}=(u_1,\dots ,u_n)\),
$$\begin{aligned} \phi (u)&:=e^u+1,&\psi (u)&:=u(e^u-1), \end{aligned}$$
(6.3)
$$\begin{aligned} \varPhi (\mathbf{u})&:=\sum _{i=1}^n\phi (u_i),&\varPsi (\mathbf{u})&:=\sum _{i=1}^n\psi (u_i). \end{aligned}$$
(6.4)
Then \(\varPsi (\mathbf{u})/\varPhi (\mathbf{u})\rightarrow +\infty \) as \(||\mathbf{u}||\rightarrow \infty \).

Proof

Note that \(\phi (u)>0\) and \(\psi (u)\ge 0\) for all \(u\in {{\mathbb {R}}}\), and that \(\psi (u)/\phi (u)\rightarrow +\infty \) as \(u\rightarrow \pm \infty \).

Fix \(B>0\). Then \(\psi (u)-B\phi (u)>0\) if |u| is large enough, and thus there exists a constant \(C=C(B)\ge 0\) such that \(\psi (u)-B\phi (u) \ge -C\) for all \(u\in {{\mathbb {R}}}\). Consequently, for any \(\mathbf{u}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} B\varPhi (\mathbf{u})=\sum _{i=1}^nB\phi (u_i) \le \sum _{i=1}^n\bigl (\psi (u_i)+C\bigr ) =\varPsi (\mathbf{u})+nC. \end{aligned}$$
(6.5)
Furthermore, \(\psi (u)\rightarrow +\infty \) as \(u\rightarrow \pm \infty \), and thus \(\varPsi (\mathbf{u})\rightarrow +\infty \) as \(||\mathbf{u}||\rightarrow \infty \). Consequently, there exists \(M=M(B)\) such that if \(||\mathbf{u}||>M\), then \(\varPsi (\mathbf{u})>nC\), and hence, by (6.5), \(B\varPhi (\mathbf{u})< 2 \varPsi (\mathbf{u})\), i.e., \(\varPsi (\mathbf{u})/\varPhi (\mathbf{u})>B/2\). Since B is arbitrary, this completes the proof. \(\square \)

Proof of Lemma 6.1

The CTMC \(\xi (t)\) is transient by Theorem 2.1(iii)(f) (Lemma 4.2). Let
$$\begin{aligned} Q(\mathbf{x}):=\frac{1}{2}\langle (\alpha E+\beta A)\mathbf{x},\mathbf{x}\rangle , \qquad \mathbf{x}\in {{\mathbb {R}}}^n, \end{aligned}$$
(6.6)
be the quadratic part of \(W(\mathbf{x})\) in (3.1). Let, as in Sect. 4, \(\mathbf{v}_1,\dots ,\mathbf{v}_n\) be an orthonormal basis of eigenvectors of A with \(A\mathbf{v}_k=\lambda _k\mathbf{v}_k\). The assumptions imply \(\beta >0\) and thus, for any \(k\le n\), \(\alpha +\beta \lambda _k\le \alpha +\beta \lambda _1=0\). Hence, for any vector \(\mathbf{x}=\sum _{k=1}^nc_k\mathbf{v}_k\),
$$\begin{aligned} Q(\mathbf{x})=\frac{1}{2}\sum _{k=1}^n(\alpha +\beta \lambda _k)c_k^2\le 0. \end{aligned}$$
(6.7)
In other words, \(Q(\mathbf{x})\) is a negative semi-definite quadratic form on \({{\mathbb {R}}}^n\).
We denote the gradient of \(Q(\mathbf{x})\) by \(U(\mathbf{x})=\bigl (U_1(\mathbf{x}),\dots ,U_n(\mathbf{x})\bigr )\). Thus, by (6.6),
$$\begin{aligned} U(\mathbf{x}):=\nabla Q(\mathbf{x}) = (\alpha E+\beta A)\mathbf{x}. \end{aligned}$$
(6.8)
It follows from (6.6) and (6.8) that, for any \(\mathbf{x}\in {{\mathbb {R}}}^n\),
$$\begin{aligned} Q(\mathbf{x}\pm \mathbf{e}_i) =Q(\mathbf{x})+Q(\mathbf{e}_i)\pm \langle (\alpha E+\beta A)\mathbf{x},\mathbf{e}_i\rangle =Q(\mathbf{x})+\frac{\alpha }{2}\pm U_i(\mathbf{x}). \end{aligned}$$
(6.9)
Let \(a':=-\alpha /2>0\). Since \(W(\mathbf{x})=Q(\mathbf{x})+a'S(\mathbf{x})\) by (3.1), and \(S(\mathbf{x})\) is the linear function (3.2), it follows from (6.9) that
$$\begin{aligned} W(\mathbf{x}\pm \mathbf{e}_i)-W(\mathbf{x}) = Q(\mathbf{x}\pm \mathbf{e}_i)-Q(\mathbf{x}) \pm a'S(\mathbf{e}_i) =\pm U_i(\mathbf{x})-a'\pm a'. \end{aligned}$$
(6.10)
In particular, for \(\xi \in {\mathbb {Z}}_+^n\), the rate of increase of the i-th component is, by (1.2) and (3.3),
$$\begin{aligned} q_{\xi ,\xi +\mathbf{e}_i}= e^{W(\xi +\mathbf{e}_i)-W(\xi )}=e^{U_i(\xi )}. \end{aligned}$$
(6.11)
Fix \(\xi \in {\mathbb {Z}}_+^n\) and consider the DTMC \(\zeta (t)\) started at \(\zeta (0)=\xi \), and the stochastic process \(Q_{\zeta }(t):=Q(\zeta (t))\), \(t\in {\mathbb {Z}}_+\). Denote the change of \(Q_{\zeta }(t)\) in the first step by \(\varDelta Q_{\zeta }(0):=Q_{\zeta }(1)-Q_{\zeta }(0)\). Then the expected change of \(Q_{\zeta }(t)\) in the first step is, using (6.9), (5.1), (1.2) and (6.11),
$$\begin{aligned} {\mathbb {E}}\bigl (\varDelta Q_{\zeta }(0)\mid \zeta (0)=\xi \bigr )&=\sum _\eta p_{\xi ,\eta }\bigl (Q(\eta )-Q(\xi )\bigr ) \nonumber \\&=\sum _{i=1}^n\Bigl (p_{\xi ,\xi +\mathbf{e}_i} \bigl (U_i(\xi )-a'\bigr ) +p_{\xi ,\xi -\mathbf{e}_i} \bigl (-U_i(\xi )-a'\bigr ) \Bigr ) \nonumber \\&=q_\xi ^{-1}\sum _{i=1}^n\Bigl (e^{U_i(\xi )}U_i(\xi ) -{\varvec{1}}_{\xi _i>0} U_i(\xi )\Bigr )-a', \end{aligned}$$
(6.12)
where \({\varvec{1}}_{{\mathcal {E}}}\) denotes the indicator of an event \({\mathcal {E}}\). Furthermore, using (6.11) and the notation (6.4),
$$\begin{aligned} q_\xi :=\sum _{i=1}^n\bigl (q_{\xi ,\xi +\mathbf{e}_i}+q_{\xi ,\xi -\mathbf{e}_i}\bigr ) =\sum _{i=1}^n\bigl (e^{U_i(\xi )}+{\varvec{1}}_{\xi _i>0}\bigr ) \le \varPhi (U(\xi )). \end{aligned}$$
(6.13)
(With equality unless some \(\xi _i=0\).) Moreover, if \(\xi _i=0\), then (6.8) implies \(U_i(\xi )=\beta \sum _{j\sim i}\xi _j\ge 0\). Hence, (6.12) yields
$$\begin{aligned} \begin{aligned} {\mathbb {E}}\bigl (\varDelta Q_{\zeta }(0)\mid \zeta (0)=\xi \bigr )&\ge q_\xi ^{-1}\sum _{i=1}^n\Bigl (e^{U_i(\xi )}U_i(\xi ) -U_i(\xi )\Bigr )-a'\\&\ge \frac{\varPsi (U(\xi ))}{\varPhi (U(\xi ))}-a'. \end{aligned} \end{aligned}$$
(6.14)
Lemma 6.2 now implies the existence of a constant \({}C_{1}\) such that if \(||U(\xi )||\ge C_{1}\), then \({\mathbb {E}}\bigl (\varDelta Q_{\zeta }(0)\mid \zeta (0)=\xi \bigr )\ge 0\).
We have, as in (6.7), with \(\omega _k:=\alpha +\beta \lambda _k\le 0\), the eigenvalues of \(\alpha E+\beta A\),
$$\begin{aligned} Q(\mathbf{x})=\frac{1}{2}\sum _{k=1}^n\omega _k \langle \mathbf{x},\mathbf{v}_k\rangle ^2, \end{aligned}$$
(6.15)
and it follows that the gradient \(U(\mathbf{x})\) can be expressed as
$$\begin{aligned} U(\mathbf{x})=\sum _{k=1}^n\omega _k \langle \mathbf{x},\mathbf{v}_k\rangle \mathbf{v}_k, \end{aligned}$$
(6.16)
and thus
$$\begin{aligned} || U(\mathbf{x})||^2=\sum _{k=1}^n\omega _k^2 \langle \mathbf{x},\mathbf{v}_k\rangle ^2. \end{aligned}$$
(6.17)
Comparing (6.15) and (6.17), and recalling \(\omega _k\le 0\), we see that
$$\begin{aligned} 2 \min _k|\omega _k|\cdot |Q(\mathbf{x})| \le ||U(\mathbf{x})||^2 \le 2\max _k|\omega _k|\cdot |Q(\mathbf{x})|. \end{aligned}$$
(6.18)
Hence, the result above shows the existence of a constant \({}C_{2}\) such that
$$\begin{aligned} \text {If}\quad |Q(\xi )|\ge C_{2}, \quad \text {then}\quad {\mathbb {E}}\bigl (\varDelta Q_{\zeta }(0)\mid \zeta (0)=\xi \bigr )\ge 0. \end{aligned}$$
(6.19)
Fix \(m\ge 0\) and consider \(\zeta (t)\) for \(t\ge m\). Define the stopping time \(\tau _m:=\inf \{t\ge m:|Q_{\zeta }(t)|\le C_{2}\}\). Then (6.19) and the Markov property imply that the stopped process \(-Q_{\zeta }(t\wedge \tau _m)\), \(t\ge m\), is a positive supermartingale. (Recall that \(Q_{\zeta }(t)\le 0\) by (6.7).) Hence, this process converges a.s. to a finite limit. In particular, if we define the events \({\mathcal {E}}_m:=\{|Q_{\zeta }(t)|>C_{2}\text { for every }t\ge m\}\) and \({\mathcal {E}}':=\{|Q_{\zeta }(t)|\rightarrow \infty \text { as }t\rightarrow \infty \}\), then \({\mathbb {P}}({\mathcal {E}}_m\cap {\mathcal {E}}')=0\). Clearly, \({\mathcal {E}}'=\bigcup _{m=1}^\infty {\mathcal {E}}'\cap {\mathcal {E}}_m\). Consequently, \({\mathbb {P}}({\mathcal {E}}')=0\).

We have shown that a.s. \(|Q_{\zeta }(t)|=|Q(\zeta (t))|\) does not converge to \(\infty \). In other words, a.s. there exists a (random) constant M such that \(|Q(\zeta (t))|\le M\) infinitely often. By (6.18) and (6.13), there exists for each \(M<\infty \) a constant \({}C_{3}(M)<\infty \) such that \(|Q(\xi )|\le M\) implies \(q_\xi \le C_{3}(M)\). Consequently, a.s., \(q_{\zeta (t)}\le C_{3}(M)\) infinitely often, and thus \(\sum _{t=0}^\infty q_{\zeta (t)}^{-1}=\infty \), which implies that \(\xi (t)\) does not explode. \(\square \)
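The comparison (6.15)–(6.18) can be made completely explicit in a small example. Take the hypothetical case \(n=2\), \(G={\mathsf {K}}_2\) with the boundary parameters \(\alpha =-1\), \(\beta =1\) (so \(\alpha +\beta \lambda _1=0\), \(\omega _1=0\) and \(\omega _2=-2\)); then \(Q(\mathbf{x})=-(x_1-x_2)^2/2\), \(U(\mathbf{x})=(x_2-x_1,\,x_1-x_2)\), and the upper bound in (6.18) holds with equality. The following sketch (our own illustration, not part of the proof) verifies this numerically:

```python
# Our own illustration of (6.15)-(6.18); not part of the proof.
# Hypothetical case: n = 2, G = K_2, alpha = -1, beta = 1, so
# omega_1 = 0, omega_2 = -2 and max_k |omega_k| = 2.
def Q(x1, x2):
    # Q(x) = (1/2) sum_k omega_k <x, v_k>^2 = -(x1 - x2)^2 / 2.
    return -0.5 * (x1 - x2) ** 2

def U(x1, x2):
    # U(x) = (alpha E + beta A) x = (x2 - x1, x1 - x2).
    return (-x1 + x2, x1 - x2)

# Check ||U(x)||^2 = 2 * max_k|omega_k| * |Q(x)| on a grid of states.
for x1 in range(6):
    for x2 in range(6):
        u1, u2 = U(x1, x2)
        assert abs(u1 * u1 + u2 * u2 - 2 * 2 * abs(Q(x1, x2))) < 1e-9
print("upper bound in (6.18) holds with equality on K_2")
```

Equality occurs here because only one eigenvalue \(\omega _k\) is nonzero; for general graphs (6.18) gives a two-sided comparison instead.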

Remark 6.2

Note that if \(\alpha <0\), \(\beta >0\) and \(\alpha +\lambda _1\beta <0\), then the function Q defined in (6.6) is negative definite, so that \({{\tilde{Q}}}(\mathbf{x}):=-Q(\mathbf{x})\rightarrow \infty \) as \(||\mathbf{x}||\rightarrow \infty \). It then follows from (6.14) that \({{\tilde{Q}}}\) can serve as a Lyapunov function in Foster's criterion for positive recurrence (e.g. see [9, Theorem 2.6.4]): the DTMC \(\zeta (t)\) is positive recurrent, and hence so is the CTMC \(\xi (t)\) by (5.4). In fact, the function \({{\tilde{Q}}}\) was used in Foster's criterion in [13, Sect. 4.1.1] to show positive recurrence of the DTMC \(\zeta (t)\) in the special case \(\alpha <0\) and \(\alpha +\beta \max _i\nu _i<0\).

Proof of Theorem 6.1

The non-explosive case (i)(a) follows from Theorem 2.1(i) when \(\alpha +\beta \lambda _1(G)<0\) (then the chain is positive recurrent), and from Lemma 6.1 when \(\alpha +\beta \lambda _1(G)=0\). The other non-explosive cases (i)(b) and (i)(c) are trivial because in these cases (1.2) implies \(q_{\xi ,\eta }\le 1\), and thus \(q_\xi \le 2n\) is bounded.

For explosion, we may assume that G is connected, since we otherwise may consider the components of G separately, see Remark 1.2. Then, [13, Theorem 1(3) and its proof] show that if \(\alpha +\beta \min _i \nu _i>0\) and \(\beta \ge 0\), then \(\xi (t)\) explodes a.s.; this includes the cases (ii)(b) and (ii)(c) above, and the case \(\alpha >0\), \(\beta \ge 0\). Furthermore, [13, Theorem 2] shows that if \(\alpha >0\) and \(\beta \le 0\), then \(\xi (t)\) a.s. explodes; together with the result just mentioned, this shows explosion when \(\alpha >0\). \(\square \)

Remark 6.3

It is shown in [13] that explosion may occur in several different ways, depending on both the parameters \(\alpha ,\beta \) and the graph G. For example, if G is a star, then there are (at least) three possibilities, each occurring with probability 1 when \((\alpha ,\beta )\) is in some region: a single component \(\xi _i\) explodes (tends to infinity in finite time); two adjacent components explode simultaneously; or all components explode simultaneously.

Furthermore, the results in [13] show that in the explosive cases in Theorem 6.1, the Markov chain asymptotically evolves as a pure birth process, in the sense that, with probability one, there is a random finite time after which none of the components decreases, i.e. there are no “death” events after this time. Consequently, the corresponding discrete time Markov chain can be regarded as a growth process on a graph similar to interacting urn models (e.g., see models in [1, 11] and [12]). One of the main problems in such growth processes is the same as in the urn models. Namely, it is of interest to understand how exactly the process escapes to infinity, i.e. whether all components grow indefinitely, or the growth localises in a particular subset of the underlying graph.

We do not discuss problems of this sort here, and hope to address them elsewhere.

7 A Modified Model

In this section, we study the CTMC \({{\widetilde{\xi }}}(t)\) with the rates \({{\widetilde{q}}}_{\xi ,\eta }\) in (1.3), and the corresponding DTMC \({{\widetilde{\zeta }}}(t)\). This model is interesting since we have “decoupled” \(\alpha \) and \(\beta \), with birth rates depending on \(\alpha \) and death rates depending on \(\beta \).

Since \({{\widetilde{q}}}_{\xi ,\xi \pm \mathbf{e}_i}\) differ from \(q_{\xi ,\xi \pm \mathbf{e}_i}\) by the same factor \(e^{-\beta \sum _{j:j\sim i}\xi _j}\), which furthermore does not depend on \(\xi _i\), the balance equation (3.4) holds for \({{\widetilde{q}}}_{\xi ,\eta }\) too, and thus \({{\widetilde{\xi }}}(t)\) has the same invariant measure \(\mu (\xi )=e^{W(\xi )}\) as \(\xi (t)\).

The electric network \({{\widetilde{\varGamma }}}_{{\alpha , \beta , G}}\) corresponding to \({{\widetilde{\xi }}}(t)\) has conductances
$$\begin{aligned} {{\widetilde{C}}}_{\xi -\mathbf{e}_i, \xi }:=e^{W(\xi -\mathbf{e}_i)+\alpha (\xi _i-1)}=e^{W(\xi )-\beta (A\xi )_i}. \end{aligned}$$
(7.1)

Remark 7.1

If \(\beta >0\), then \({{\widetilde{C}}}_{\xi ,\eta }\le C_{\xi ,\eta }\), and if \(\beta <0\), then \({{\widetilde{C}}}_{\xi ,\eta }\ge C_{\xi ,\eta }\). (If \(\beta =0\), the two models are obviously identical.)

Theorem 7.1

The results in Theorem 2.1 hold for \({{\widetilde{\xi }}}(t)\) too, with a single exception: If \(e(G)=1\), \(\alpha <0\) and \(\alpha +\beta \lambda _1(G)=0\), then \({{\widetilde{\xi }}}(t)\) is null recurrent while \(\xi (t)\) is transient.

Here \(\lambda _1(G)\) is as above the largest eigenvalue of G. If \(e(G)=1\), then \(\lambda _1(G)=1\); thus the exceptional case is \(e(G)=1\), \(\alpha =-\beta <0\).

Proof

The lemmas in Sect. 4 all hold for \({{\widetilde{\xi }}}(t)\) too by the same proofs with no or minor modifications, except Lemma 4.2 in the case \(\alpha <0\), \(\alpha +\beta \lambda _1=0\); we omit the details. This exceptional case is treated in Lemmas 7.2 and 7.3 below. \(\square \)

A few cases alternatively follow by Remark 7.1 and the Rayleigh monotonicity law.

Before treating the exceptional case, we give a simple combinatorial lemma.

Lemma 7.1

Suppose that G is a connected graph with \(e(G)\ge 2\), and let as above \(\mathbf{v}_1=(v_{11},\dots ,v_{1n})\) be a positive eigenvector of A with eigenvalue \(\lambda _1\). Then, for each i,
$$\begin{aligned} v_{1i}<\sum _{j\ne i} v_{1j}. \end{aligned}$$
(7.2)

Proof

First, e.g. by (6.1), \(\lambda _1\ge 1\). Hence, for every i,
$$\begin{aligned} v_{1i}\le \lambda _1 v_{1i}=(A\mathbf{v}_1)_i=\sum _{j\sim i}v_{1j}\le \sum _{j\ne i} v_{1j}. \end{aligned}$$
(7.3)
If one of the inequalities in (7.3) is strict, then (7.2) holds. In the remaining case \(\lambda _1=1\), and every \(j\ne i\) is a neighbour of i. Consequently, if \(j\ne i\), then
$$\begin{aligned} v_{1j}=\lambda _1 v_{1j}=(A\mathbf{v}_1)_j =\sum _{k\sim j}v_{1k} \ge v_{1i}. \end{aligned}$$
(7.4)
By the assumption \(e(G)\ge 2\), G has at least 3 vertices, and thus (7.4) implies \(\sum _{j\ne i} v_{1j} \ge 2 v_{1i}>v_{1i}\), so (7.2) holds in this case too. \(\square \)
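Lemma 7.1 is easy to check numerically for a concrete graph. The following sketch (our own illustration, using the hypothetical example of the path \(P_3\) with edges 12 and 23, which is connected with \(e(G)=2\)) computes the Perron eigenvector by power iteration; the iteration is applied to \(A+E\) rather than A, since the shift avoids the oscillation caused by the eigenvalue \(-\lambda _1\) of a bipartite graph:

```python
# Our own illustration of Lemma 7.1; not part of the proof.
# Hypothetical example: the path P_3, with edges 12 and 23.
ADJ = [[0, 1, 0],
       [1, 0, 1],
       [0, 1, 0]]

def perron_vector(adj, iters=200):
    # Power iteration on A + I; its Perron eigenvector coincides with
    # that of A, and the shift makes the dominant eigenvalue strictly
    # largest in modulus even for bipartite graphs.
    n = len(adj)
    v = [1.0] * n
    for _ in range(iters):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n))
             for i in range(n)]      # compute (A + I) v
        s = sum(w)
        v = [x / s for x in w]       # normalise; entries stay positive
    return v

v = perron_vector(ADJ)
# Each entry is strictly smaller than the sum of the others, as in (7.2).
print(v)
```

For \(P_3\) the exact Perron eigenvector is proportional to \((1,\sqrt{2},1)\), and indeed \(\sqrt{2}<1+1\), in agreement with (7.2).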

Lemma 7.2

If \(\alpha <0\), \(\alpha +\beta \lambda _1\ge 0\) and \(e(G)\ge 2\), then the CTMC \({{\widetilde{\xi }}}(t)\) is transient.

Proof

If G is connected, then \(\mathbf{v}_1\) satisfies (7.2) by Lemma 7.1. On the other hand, if G is disconnected and has a component with at least two edges, it suffices to consider that component.

In the remaining case, G consists only of isolated edges and vertices. There are at least two edges, which we w.l.o.g. may assume are 12 and 34. Then \(\lambda _1=1\) and \(\mathbf{v}_1:=\frac{1}{2}(\mathbf{e}_1+\mathbf{e}_2+\mathbf{e}_3+\mathbf{e}_4)=\frac{1}{2}(1,1,1,1,0,\dots )\) is an eigenvector satisfying (7.2).

Hence we may assume that \(\mathbf{v}_1\) satisfies (7.2). Then there exists \(\delta >0\) such that for every \(i=1,\dots ,n\),
$$\begin{aligned} S(\mathbf{v}_1)= v_{1i}+\sum _{j\ne i} v_{1j} \ge 2v_{1i}+\delta . \end{aligned}$$
(7.5)
We follow the proof of Lemma 4.2, and note that there is equality in (4.4) and (4.5). Hence, for any i, again writing \(a':=-\alpha /2>0\), and using (4.3),
$$\begin{aligned} W(y_k)+\alpha y_{k,i}&=a'\bigl (S(y_k)-2y_{k,i}\bigr )+O(1) =a't_k \bigl (S(\mathbf{v}_1)-2v_{1,i}\bigr ) + O(1) \nonumber \\&\ge a'\delta t_k + O(1) \ge c' k + O(1) \end{aligned}$$
(7.6)
for some \(c'>0\). Thus, the resistance of the edge connecting \(y_{k}\) and \(y_{k+1}\) is, for some i,
$$\begin{aligned} R_{k+1}={{\widetilde{C}}}_{y_k,y_k+\mathbf{e}_i}^{-1}= e^{-W(y_k)-\alpha y_{k,i}} \le e^{-c' k+O(1)}. \end{aligned}$$
(7.7)
Hence, \(\sum _{k=1}^\infty R_k <\infty \), and the network is transient by the same argument as before. \(\square \)

Lemma 7.3

If \(\alpha <0\), \(\alpha +\beta \lambda _1\ge 0\) and \(e(G)=1\), then the CTMC \({{\widetilde{\xi }}}(t)\) is null recurrent.

Proof

Suppose first that \(n=2\) so \(G={\mathsf {K}}_2\) consists of a single edge. Then (3.1) gives, with \(a':=-\alpha /2>0\) as above,
$$\begin{aligned} W(\xi _1,\xi _2)=-a'(\xi _1-\xi _2)^2+a'(\xi _1+\xi _2), \end{aligned}$$
(7.8)
and then (7.1) yields
$$\begin{aligned} {{\widetilde{C}}}_{\xi ,\xi +\mathbf{e}_1}= e^{W(\xi )+\alpha \xi _1} =e^{-a'(\xi _1-\xi _2)^2+a'(\xi _2-\xi _1)}\le 1, \end{aligned}$$
(7.9)
and similarly, \( {{\widetilde{C}}}_{\xi ,\xi +\mathbf{e}_2}\le 1\). Hence all conductances are bounded by 1, and thus all resistances are bounded below by 1. We may compare the network \({{\widetilde{\varGamma }}}_{{\alpha , \beta , G}}\) to the network \({\mathbb {Z}}_+^2\) with unit resistances, and obtain by Rayleigh’s monotonicity law \(R_\infty ({{\widetilde{\varGamma }}}_{{\alpha , \beta , G}})\ge R_\infty ({\mathbb {Z}}_+^2)=\infty \), recalling that simple random walk on \({\mathbb {Z}}_+^2\) is recurrent by Remark 1.1 and Example 3.1. Hence \({{\widetilde{\xi }}}(t)\) is recurrent.
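The bound in (7.9) can be verified numerically on a grid of states. The following sketch (our own sanity check, with our own naming; \(a'=-\alpha /2\)) implements the conductance for \(G={\mathsf {K}}_2\) directly from (7.8) and confirms that it never exceeds 1:

```python
import itertools
import math

def conductance_up(x, y, i, a):
    """Conductance C~_{xi, xi+e_i} = exp(W(xi) + alpha * xi_i) for G = K_2,
    where alpha = -2a < 0 and W is given by (7.8)."""
    W = -a * (x - y) ** 2 + a * (x + y)
    alpha = -2.0 * a
    return math.exp(W + alpha * (x if i == 1 else y))

# every conductance on a 30 x 30 grid is at most 1, as claimed in (7.9)
all_bounded = all(conductance_up(x, y, i, a=0.5) <= 1.0
                  for x, y in itertools.product(range(30), repeat=2)
                  for i in (1, 2))
```

The exponent simplifies to \(-a'\bigl ((\xi _1-\xi _2)^2+(\xi _1-\xi _2)\bigr )\le 0\) for the first coordinate, which is why the bound holds with equality on the diagonal.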

The invariant measure \(e^{W(\xi )}\) is the same as for \(\xi (t)\) and has total mass \(Z_{{\alpha , \beta , G}}=\infty \) by Lemma 4.12; hence \({{\widetilde{\xi }}}(t)\) is not positive recurrent.

This completes the proof when G is connected. If G is disconnected, then G consists of one edge and one or several isolated vertices. By Remark 1.2, \(\xi (t)\) then consists of \(n-1\) independent parts: one part is the CTMC in \({\mathbb {Z}}_+^2\) defined by the graph \({\mathsf {K}}_2\), which is null recurrent by the first part of the proof; the other parts are independent copies of the CTMC in \({\mathbb {Z}}_+\) defined by a single vertex, and these are positive recurrent since \(\alpha <0\). It is now easy to see that the combined \(\xi (t)\) is null recurrent. \(\square \)

The corresponding DTMC \({{\widetilde{\zeta }}}(t)\) has invariant measure
$$\begin{aligned} {{\widetilde{C}}}_\xi :=\sum _\eta {{\widetilde{C}}}_{\xi ,\eta }. \end{aligned}$$
(7.10)
Note that this (in general) differs from the invariant measure \(C_\xi \) for \(\zeta (t)\), see (5.2). Denote the total mass of this invariant measure by
$$\begin{aligned} {{\widetilde{Z}}}_{{\alpha , \beta , G}}:=\sum _\xi {{\widetilde{C}}}_\xi =\sum _{\xi ,\eta } {{\widetilde{C}}}_{\xi ,\eta }. \end{aligned}$$
(7.11)
There is no obvious analogue of the relation (5.4), but we can nevertheless prove the following analogue of Lemma 5.1.

Lemma 7.4

Let \(-\infty<\alpha <\infty \) and \(-\infty \le \beta <\infty \). Then \({{\widetilde{Z}}}_{{\alpha , \beta , G}}<\infty \) if and only if \(\alpha <0\) and \(\alpha +\beta \lambda _1<0\).

Proof

This follows from the proof of Lemma 4.12 with minor modifications. In particular, in the case \(\alpha <0\) and \(\alpha +\beta \lambda _1=0\), we argue also as in (7.5)–(7.7) in the proof of Lemma 7.2 (but now allowing \(\delta =0\)). We omit the details. \(\square \)

Theorem 7.2

Theorem 7.1 holds for the DTMC \({{\widetilde{\zeta }}}(t)\) too.

Proof

By Theorem 7.1 for recurrence vs transience, and by Lemma 7.4 for positive recurrence vs null recurrence. \(\square \)

We do not analyse the modified model further.

8 Alternative Proofs Using Lyapunov Functions

In this section we give alternative proofs of some parts of Theorem 2.1. These proofs do not use reversibility, and therefore have potential extensions to cases where electric networks are not applicable. They are based on the following recurrence criterion for countable Markov chains using Lyapunov functions, see e.g. [9, Theorem 7.2.1].

Recurrence criterion 8.1

A CTMC with values in \({\mathbb {Z}}_{+}^n\) is recurrent if and only if there exists a positive function f (the Lyapunov function) on \({\mathbb {Z}}_{+}^n\) such that \(f(\xi )\rightarrow \infty \) as \(\xi \rightarrow \infty \) and \(\mathsf{L}f(\xi )\le 0\) for all \(\xi \notin D\), where \(\mathsf{L}\) is the Markov chain generator, and D is a finite set.

Note that the Lyapunov function \(f(\xi )\) is far from unique. The idea of the method is to find some explicit function f for which the conditions can be verified. There is also a related criterion for transience [4, Theorem 2.2.2], but we will not use it here.

We give only some examples. (See also [13] for further examples.) It might be possible to give a complete proof of Theorem 2.1 using these methods, but this seems rather challenging. Note that (since our Markov chains have bounded steps) the Lyapunov function f can be changed arbitrarily on a finite set; hence it suffices to define \(f(\xi )\) (and verify its properties) for \(||\xi ||\) large. We do so, usually without comment, in the examples below.

Example 8.1

Proof of the hard-core case of Theorem 2.1(ii)(a) by the recurrence criterion 8.1. Assume that \(\alpha =0\), \(\beta =-\infty \) and \(k_{\max }(G)\le 2\). As noted in Sect. 3.3, we may assume that the Markov chain lives on \(\varGamma _0\) defined in (3.8); since \(k_{\max }(G)\le 2\), this implies that no more than two components of the process can be non-zero. Therefore, the Markov chain evolves as a simple random walk on a certain finite union of quadrants of \({\mathbb {Z}}_+^2\) and half-lines \({\mathbb {Z}}_+\) glued along the axes. Each of these random walks is null recurrent, and hence the whole process should be null recurrent as well. We provide a rigorous justification of this heuristic argument by using the recurrence criterion 8.1.

The generator \(\mathsf{L}\) of the Markov chain in the case \(\alpha =0\), \(\beta =-\infty \) is
$$\begin{aligned} \mathsf{L}f(\xi )=\sum _{i=1}^n \left( f\left( \xi +\mathbf{e}_i\right) -f(\xi )\right) {\varvec{1}}_{\{\xi : \xi _j=0, \, j\sim i\}} +\left( f\left( \xi -\mathbf{e}_{i}\right) -f(\xi )\right) {\varvec{1}}_{\{\xi _i>0\}}. \end{aligned}$$
We define a Lyapunov function on \({\mathbb {Z}}_{+}^n\) by
$$\begin{aligned} f(\xi ):= \log \left( ||\xi -\mathbf{e}||^2 -n+\frac{3}{2}\right) , \qquad ||\xi ||\ge C_1, \end{aligned}$$
(8.1)
where \(\mathbf{e}=(1,\dots ,1)\in {\mathbb {Z}}_{+}^n\) is the vector all of whose coordinates equal 1, and \(C_1>0\) is sufficiently large so that the expression inside the log is greater than 1. Note that the function is defined for any state in \({\mathbb {Z}}_+^n\), but we consider it only on the subset \(\varGamma _0\). Let \(\xi \in \varGamma _0\) with \(||\xi ||>C_1+1\). First, assume that \(\xi \) has two non-zero components, say \(x>0\) and \(y>0\), so that
$$\begin{aligned} f(\xi )=\log \left( (x-1)^2+(y-1)^2-\frac{1}{2}\right) , \end{aligned}$$
and both x and y can increase as well as decrease. A direct computation gives that
$$\begin{aligned} \mathsf{L}f(\xi )&=\log \left( x^2+(y-1)^2-\frac{1}{2}\right) +\log \left( (x-1)^2+y^2-\frac{1}{2}\right) \\&\qquad +\log \left( (x-2)^2+(y-1)^2-\frac{1}{2}\right) +\log \left( (x-1)^2+(y-2)^2-\frac{1}{2}\right) \\&\qquad -4\log \left( (x-1)^2+(y-1)^2-\frac{1}{2}\right) \\&=\log \left( 1-\frac{64(x-y)^2(x+y-2)^2}{(2x^2+2y^2-4x-4y+3)^4} \right) \le 0. \end{aligned}$$
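The algebraic identity behind this computation is easy to confirm numerically. The sketch below (our own check; the function names are ours) compares the four increment terms of \(\mathsf{L}f\) with the closed form:

```python
import math

def lf_two(x, y):
    """Lf at a state with two non-zero neighbouring components x, y
    (hard-core case alpha = 0, beta = -infinity), summed term by term."""
    g = lambda u, v: math.log((u - 1) ** 2 + (v - 1) ** 2 - 0.5)
    return g(x + 1, y) + g(x, y + 1) + g(x - 1, y) + g(x, y - 1) - 4 * g(x, y)

def lf_two_closed(x, y):
    """Closed form log(1 - 64 (x-y)^2 (x+y-2)^2 / (2x^2+2y^2-4x-4y+3)^4)."""
    d = 2 * x**2 + 2 * y**2 - 4 * x - 4 * y + 3
    return math.log(1.0 - 64.0 * (x - y) ** 2 * (x + y - 2) ** 2 / d**4)
```

For instance, `lf_two(3, 2)` and `lf_two_closed(3, 2)` agree to machine precision, and both are \(\le 0\).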
Next, assume that \(\xi \) has only one non-zero component, say \(x=a+1>0\). Then \(f(\xi )=\log \left( a^2+1/2\right) ,\) and this component can both increase and decrease. Note that some of the other components may also increase by 1, and assume there are \(m\ge 0\) such components. A direct computation gives that
$$\begin{aligned} \mathsf{L}f(\xi )&=\left[ \log \left( (a+1)^2+\frac{1}{2}\right) +\log \left( (a-1)^2+\frac{1}{2}\right) -2\log \left( a^2+\frac{1}{2}\right) \right] \\&+m\left[ \log \left( a^2-\frac{1}{2}\right) -\log \left( a^2+\frac{1}{2}\right) \right] \\&=\log \left( \frac{4a^4-4a^2+9}{4a^4+4a^2+1}\right) +m\left[ \log \left( \frac{2a^2-1}{2a^2+1}\right) \right] \le 0. \end{aligned}$$
Hence, \({\mathsf {L}}f(\xi )\le 0\) whenever \(\xi \in \varGamma _0\) with \(||\xi ||>C_1+1\). It follows now from the recurrence criterion 8.1 that the CTMC \(\xi (t)\) is recurrent.
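The one-component computation can be sanity-checked in the same way (again our own illustration; here a is the single non-zero component minus 1, and m the number of components that may increase):

```python
import math

def lf_one(a, m):
    """Lf at a state whose single non-zero component is x = a + 1, with m
    other components allowed to increase (hard-core case)."""
    return (math.log((a + 1) ** 2 + 0.5) + math.log((a - 1) ** 2 + 0.5)
            - 2 * math.log(a**2 + 0.5)
            + m * (math.log(a**2 - 0.5) - math.log(a**2 + 0.5)))

def lf_one_closed(a, m):
    """Closed form log((4a^4-4a^2+9)/(4a^4+4a^2+1)) + m log((2a^2-1)/(2a^2+1))."""
    return (math.log((4 * a**4 - 4 * a**2 + 9) / (4 * a**4 + 4 * a**2 + 1))
            + m * math.log((2 * a**2 - 1) / (2 * a**2 + 1)))
```

Both expressions agree and are non-positive for all \(a\ge 2\) and \(m\ge 0\).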
Now consider the case \(\alpha =0\) and \(-\infty<\beta <0\). The generator of the Markov chain with parameter \(\alpha =0\) is
$$\begin{aligned} {\mathsf {L}}f(\xi )=\sum _{i=1}^n \left( f\left( \xi +\mathbf{e}_i\right) -f(\xi )\right) e^{\beta (A\xi )_i} +\left( f\left( \xi -\mathbf{e}_{i}\right) -f(\xi )\right) {\varvec{1}}_{\{\xi _i>0\}}. \end{aligned}$$
(8.2)
We consider for simplicity only some small graphs G, using modifications of the Lyapunov function (8.1) used in the hard-core case.

Recurrence in the case \(\alpha =0\), \(b:=-\beta >0\) and \(G={\mathsf {K}}_2\), the graph with just 2 vertices and a single edge, was shown in [13] by applying the recurrence criterion 8.1 with the Lyapunov function \(f(\xi )=\log (\xi _1+\xi _2+1)\). Alternatively, one could use e.g. \(f(\xi )=\log (\xi _1+\xi _2)\) or \(\log (\xi _1^2+\xi _2^2)\). We extend this to the case \(G={\mathsf {K}}_n\), the complete graph with n vertices, for any \(n\ge 2\).

Example 8.2

Recurrence in the case \(\alpha =0\), \(\beta =-b<0\) and \(G={\mathsf {K}}_n\). We use the function \(f(\xi ):=\log \Vert \xi \Vert \). (Similar arguments work for variations such as \(\log \bigl (||\xi ||^2\pm 1\bigr )\) and \(\log (\xi _1+\dots +\xi _n)\).)

Regard \(f(\xi )\) as a function on \({{\mathbb {R}}}^n\setminus \{0\}\) and write \(r=||\xi ||\). The partial derivatives of f are
$$\begin{aligned} \frac{\partial f(\xi )}{\partial \xi _i}=\frac{\xi _i}{r^2} \end{aligned}$$
(8.3)
and all second derivatives are \(O\bigl (r^{-2}\bigr )\). Hence, a Taylor expansion of each of the differences in (8.2) yields the formula, for \(\xi \in {\mathbb {Z}}_+^n\),
$$\begin{aligned} \begin{aligned} {\mathsf {L}}f(\xi )&=\sum _{i=1}^n\frac{\xi _i}{r^2}\bigl (e^{-b(A\xi )_i}-{\varvec{1}}_{\{\xi _i>0\}}\bigr ) +O\bigl (r^{-2}\bigr )\\&=\sum _{i=1}^n\frac{\xi _i}{r^2}\bigl (e^{-b(A\xi )_i}-1\bigr ) +O\bigl (r^{-2}\bigr ). \end{aligned} \end{aligned}$$
(8.4)
Suppose first that at least 2 components \(\xi _i\) are positive. Then \((A\xi )_i=\sum _{j\ne i}\xi _j\ge 1\) for every i, and thus (8.4) implies, since \(r\le \sum _i\xi _i\),
$$\begin{aligned} {\mathsf {L}}f(\xi ) \le -\bigl (1-e^{-b}\bigr )\sum _{i=1}^n\frac{\xi _i}{r^2}+O\bigl (r^{-2}\bigr ) \le -\frac{1-e^{-b}}{r}+O\bigl (r^{-2}\bigr ), \end{aligned}$$
(8.5)
which is negative for large r, as required.
It remains to consider the case when a single component \(\xi _i\) is positive, say \(\xi =(x,0,\dots ,0)\) with \(x>0\). Then the estimate (8.4) is not good enough. Instead we find from (8.2)
$$\begin{aligned} {\mathsf {L}}f(\xi )&= \log (x+1)+\log (x-1)-2\log x+(n-1)e^{-bx}\bigl (\log \sqrt{x^2+1}-\log x\bigr ) \nonumber \\&= \log \frac{x^2-1}{x^2} +\frac{n-1}{2}e^{-bx}\log \frac{x^2+1}{x^2} \le -\frac{1}{x^2}+\frac{n-1}{2}e^{-bx}, \end{aligned}$$
(8.6)
which is negative when x is large.

Hence, in both cases, \({\mathsf {L}}f(\xi )\le 0\) when \(||\xi ||\) is large, and recurrence follows by the recurrence criterion 8.1.
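Both cases of this example can be checked numerically by implementing the generator (8.2) directly for the complete graph (a sketch under our own naming, not the paper's code):

```python
import math
import numpy as np

def generator_Kn(xi, b, f):
    """Apply the generator (8.2) to f at state xi, for G = K_n with
    alpha = 0 and beta = -b < 0."""
    n = len(xi)
    A = np.ones((n, n)) - np.eye(n)          # adjacency matrix of K_n
    total = 0.0
    for i in range(n):
        up = xi.copy(); up[i] += 1
        total += (f(up) - f(xi)) * math.exp(-b * float(A[i] @ xi))
        if xi[i] > 0:
            down = xi.copy(); down[i] -= 1
            total += f(down) - f(xi)
    return total

f = lambda xi: math.log(float(np.linalg.norm(xi)))

# Lf is negative both with two positive components and far out on an axis
two_pos = generator_Kn(np.array([10.0, 7.0, 0.0, 0.0]), 1.0, f)
one_pos = generator_Kn(np.array([12.0, 0.0, 0.0, 0.0]), 1.0, f)
```

With \(n=4\) and \(b=1\), both values are negative, matching (8.5) and (8.6).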

The argument in Example 8.2 used the fact that G is a complete graph so that \((A\xi )_i\ge 1\) unless only \(\xi _i\) is non-zero. Similar arguments work for some other graphs.

Example 8.3

Let again \(\alpha =0\), \(\beta <0\) and let \(G={\mathsf {K}}_{1,2}\), a star with 2 non-central vertices, which is the same as a path of 3 vertices. Number the vertices with the central vertex as 3, and write \(\xi =(x,y,z)\). Taylor expansions similar to the one in (8.4), but going further, show that \(f(\xi ):=\log ||\xi ||\) is not a Lyapunov function. (The problematic case is \(\xi =(x,x,0)\), with \({\mathsf {L}}f(\xi )=\frac{1}{2}r^{-4}+O(r^{-6})\).) However, similar calculations also show that \(f(\xi ):=\log (||\xi ||^2-1)\) is a Lyapunov function, showing recurrence by the recurrence criterion 8.1. We omit the details.
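The sign claims at the problematic configuration \((x,x,0)\) can be verified numerically (our own sketch, implementing (8.2) for \({\mathsf {K}}_{1,2}\); only the signs, not the asymptotic constants, are asserted):

```python
import math
import numpy as np

# adjacency matrix of K_{1,2}: non-central vertices 1, 2, central vertex 3
A = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])

def generator(xi, b, f):
    """Generator (8.2) for G = K_{1,2} with alpha = 0, beta = -b."""
    total = 0.0
    for i in range(3):
        up = xi.copy(); up[i] += 1
        total += (f(up) - f(xi)) * math.exp(-b * float(A[i] @ xi))
        if xi[i] > 0:
            down = xi.copy(); down[i] -= 1
            total += f(down) - f(xi)
    return total

f_bad = lambda xi: math.log(float(np.linalg.norm(xi)))    # log ||xi||
f_good = lambda xi: math.log(float(xi @ xi) - 1.0)        # log(||xi||^2 - 1)

state = np.array([10.0, 10.0, 0.0])   # the problematic configuration (x, x, 0)
```

At this state, `generator(state, 1.0, f_bad)` is (slightly) positive while `generator(state, 1.0, f_good)` is negative, in line with the discussion above.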

Acknowledgements

We thank James Norris for helpful comments.

References

  1. Costa, M., Menshikov, M., Shcherbakov, V., Vachkovskaia, M.: Localisation in a growth model with interaction. J. Stat. Phys. 171(6), 1150–1175 (2018)
  2. Doyle, P.G., Snell, J.L.: Random Walks and Electric Networks. Mathematical Association of America, Washington, DC (1984)
  3. Erdős, P., Taylor, S.J.: Some problems concerning the structure of random walk paths. Acta Math. Acad. Sci. Hungar. 11, 137–162 (1960)
  4. Fayolle, G., Malyshev, V., Menshikov, M.: Topics in the Constructive Theory of Countable Markov Chains. Cambridge University Press, Cambridge (1995)
  5. Karlin, S., Taylor, H.: A First Course in Stochastic Processes, 2nd edn. Academic Press, New York (1975)
  6. Kelly, F.: Reversibility and Stochastic Networks. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1979)
  7. Liggett, T.: Continuous Time Markov Processes: An Introduction. Graduate Studies in Mathematics. American Mathematical Society, Providence (2010)
  8. Lyons, R., Peres, Y.: Probability on Trees and Networks. Cambridge University Press, Cambridge (2016)
  9. Menshikov, M.V., Popov, S., Wade, A.R.: Non-homogeneous Random Walks: Lyapunov Function Methods for Near-Critical Stochastic Systems. Cambridge University Press, Cambridge (2017)
  10. Norris, J.: Markov Chains. Cambridge University Press, Cambridge (1997)
  11. Shcherbakov, V., Volkov, S.: Stability of a growth process generated by monomer filling with nearest-neighbour cooperative effects. Stoch. Process. Appl. 120, 926–948 (2010)
  12. Shcherbakov, V., Volkov, S.: Queueing with neighbours. In: Bingham, N.H., Goldie, C.M. (eds.) Probability and Mathematical Genetics: Papers in Honour of Sir John Kingman. LMS Lecture Notes Series 378, pp. 463–481. Cambridge University Press, Cambridge (2010)
  13. Shcherbakov, V., Volkov, S.: Long term behaviour of locally interacting birth-and-death processes. J. Stat. Phys. 158(1), 132–157 (2015)
  14. Volkov, S.: Vertex-reinforced random walk on arbitrary graphs. Ann. Probab. 29, 66–91 (2001)

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Svante Janson, Uppsala University, Uppsala, Sweden
  • Vadim Shcherbakov (Email author), Royal Holloway University of London, Surrey, UK
  • Stanislav Volkov, Lund University, Lund, Sweden
