
1 Introduction

In this paper, we study persistence (\(\Diamond \Box (\cdot )\)) and recurrence (\(\Box \Diamond (\cdot )\)) properties for stochastic systems. Stochastic systems are commonly used to model the effect of noise or random uncertainties on systems. Examples include probabilistic programs [17, 20] with random number generating constructs for modeling uncertainties, cyber-physical systems under the influence of external stochastic disturbances, financial process models, and biological models. Given a stochastic system, we attempt to find proofs that for a subset of states T, the behaviors of the system satisfy a persistence property \(\Diamond \Box (T)\) with probability 1 (almost surely), i.e., almost every behavior of the system eventually enters T, and stays in T, forever. Similarly, we present an approach to prove \(\Box \Diamond (T)\), i.e., almost every behavior of the system hits T infinitely often. Such persistence properties effectively prove facts about the asymptotic behavior of these processes which may take many forms, including convergence towards an “equilibrium” region, or asymptotic divergence away from a particular set. Recurrence properties can be used, for instance, to show that the system keeps returning to a set of desirable configurations even if forced to leave under the influence of stochastic disturbances.

Persistence and recurrence properties are also of independent interest to the verification and control theory communities [5, 21]. Standard model checking approaches rely on showing that the system almost surely reaches a strongly connected component [11, 13] that is a subset of T and, additionally for proving \(\Diamond \Box (T)\), has no outgoing transitions. However, Baier et al. have demonstrated that this technique is restricted to finite state stochastic transition systems [5].

In comparison, the technique we propose here can handle infinite state, discrete time, polynomial stochastic systems by automatically deriving supermartingale expressions over the system variables and leveraging properties of these supermartingale expressions. Specifically, our work extends the infinite state probabilistic transition systems used in our earlier work [8], or the probabilistic guarded command language proposed by McIver and Morgan [23] with polynomial guards and updates. However, the ideas we present here can extend to a larger class of Markov models.

In this paper, we introduce two types of proof rule arguments for proving persistence and recurrence that derive directly from classic rules for Markov chains such as the Foster-Lyapunov condition [16, 24]: (a) “Geometric” rules involve finding a nonnegative function \(V(\mathbf x )\) over the state variables \(\mathbf x \), whose expectation in the next time step is some multiplicative factor \(\alpha \in (0,1)\) of its current value \(V(\mathbf x )\). These are inspired by Lyapunov functions used in control theory to prove exponential stability. (b) “Additive” rules are analogous to ranking function arguments for (nondeterministic) program termination. These conditions were studied for program termination as “supermartingale ranking functions” (SMRFs) in our previous work [8] and proven complete for certain classes of probabilistic systems by Fioriti et al. [15] and more recently by Chatterjee et al. [10]. However, SMRFs are designed to prove almost-sure termination properties of the form \(\Diamond (T)\). In the current work, we show—rather counterintuitively—that SMRFs cannot in general prove \(\Diamond \Box (T)\) properties. We provide a suitable technical condition (of bounded increase) under which SMRFs can prove \(\Diamond \Box (T)\) properties. Next, we also show that both types of proofs are equivalent under some technical conditions: a proof using a geometric proof rule can be transformed into an equivalent proof using an additive rule, and vice-versa. Nevertheless, both forms are useful when searching for certificates of a given form such as a polynomial function over the state-variables.

Finally, we examine the problem of automatically synthesizing the functions \(V(\mathbf x )\) for polynomial, probabilistic transition systems to prove persistence and recurrence properties. Assuming a parameterized “template” form of this function, we derive conditions that must be satisfied over the parameters for proving a target property. We conclude by illustrating our approach on a variety of small, but interesting examples.

1.1 Motivating Example 1: Room Temperature Control

In [3] Abate et al. present a room temperature control problem subject to stochastic temperature perturbations. Suppose that there are two adjacent rooms, whose temperatures change according to the following stochastic difference equation:

$$\textstyle x'_i := x_i + b_i(x_0 - x_i) +a\cdot \sum \limits _{j\ne i}(x_j - x_i) +c_i\left( 1 - \sigma \left( x_i\right) \right) + \nu _i,\text { for }i \in \{1, 2\}, $$

where \(a = 0.0625\), \(b_1 = 0.0375\), \(b_2 = 0.025\) are respectively the inter-room and external heat convection constants, \(x_0=6^\circ C\) is the outdoor temperature, \(c_1 = 0.65\), \(c_2 = 0.6\) are the heat units supplied to the two rooms by the heater, and \(\nu _1,\nu _2\) are i.i.d. stochastic noise. The behavior of the heater is governed by the controller unit term \(\sigma \). We focus on the evolution of the room temperatures within the range \([6, 33]^2\).

Fig. 1.

100 simulations of the two-room controller system, with initial temperatures \(x_1\), \(x_2\) uniformly drawn from \([15, 22]^2\), under two different types of stochastic noise: (left) \(\nu _i \sim \mathcal {U}(-0.01,0.01)\), with the red horizontal lines indicating the intervals [17.8, 18.7] (for room 1) and [18.4, 19.3] (for room 2); (right) \(\nu _i \sim \mathcal {N}(0,0.25)\), with the red horizontal lines indicating the intervals [16.9, 19.6] (for room 1) and [17.3, 20.2] (for room 2).

Abate et al. construct a (nonlinear) sigmoidal controller \(\sigma (t)\) that keeps the temperatures within a comfortable range \(S : [17, 22]\times [16, 23]\) and focus on bounding the probability that the system leaves S (i.e., stochastic safety) within finitely many steps under the influence of Gaussian noise. Figure 1 shows 100 sample executions of the system when the controller is approximated with a degree-7 polynomial:

$$\begin{aligned} \sigma (t) : 29.2 - 13.42t + 2.55t^2 - 0.26t^3 + 0.015t^4 - 5.13\times 10^{-4}t^5 + 9.23\times 10^{-6}t^6 - 6.87\times 10^{-8}t^7 \end{aligned}$$

under two different types of random noise: uniform \(\mathcal {U}\) on a given range and normal \(\mathcal {N}\).

Controller \(\sigma \) was originally designed with finite-time (100 min) guarantees for keeping the system in S. We prove that under mild stochastic disturbances (left) the system satisfies the almost sure persistence property \(\Diamond \Box S\), i.e., with probability 1 the system eventually enters S and stays there forever. This is demonstrated by the proof rule persist-geom of Sect. 3.1 and the certificate \(V(x_1,x_2) : (x_1 - 18.3)^2 + (x_2 - 18.8)^2\). When the level of stochastic disturbance is increased (right), the almost sure persistence property no longer holds. This is consistent with the results in [3]; however, a weaker almost sure recurrence property, \(\Box \Diamond (16.9 \le x_1 \le 19.6 \wedge 17.3 \le x_2 \le 20.2)\), holds, i.e., with probability 1 the system visits the region infinitely often. This is demonstrated by proof rule rec of Sect. 3.2 using the same certificate function V.

Fig. 2.

A nonlinear Markov jump system with 2 modes.

1.2 Motivating Example 2: Nonlinear Markov Jump System

Figure 2 shows a nonlinear Markov jump system with two modes \(q_1, q_2\) and two state variables \(\mathbf x : (x,y)\) that evolve according to the mode-dependent difference equations. The system jumps between modes with equal probability.

Observe that 0 is an equilibrium and \(X : [-0.5,0.5]^2\) is an invariant of the system, i.e., all sample paths starting in X stay in the set forever. Figure 3(a) shows that sample paths starting inside X converge towards 0. We establish that the persistence property \( \Diamond \Box ( |x| \le 0.1 \wedge |y| \le 0.1)\) holds almost surely over all executions of the system, by synthesizing the nonnegative certificate function \(V(\mathbf x ) : 2.3 x^2 + 4.15 xy + 3.7 y^2\). After one time step, the expected value of V is at most a \(\tfrac{1}{2}\)-fraction of its original value, i.e., \((\forall \mathbf x \in X)\ \mathbb {E}( V(\mathbf x ') | \mathbf x ) \le \tfrac{1}{2} V(\mathbf x )\). Figure 3(b) plots the function V over the sample paths, showing its convergence. We use the certificate \(V(\mathbf x )\) in persist-geom (Sect. 3.1) to establish the required property.

Outside X, the system appears unstable as shown in Fig. 3(c), yet the behaviors approach \(x=y\) asymptotically. Using the certificate \(\hat{V}(\mathbf x ) : (x-y)^2\) in spersist-geom (Strong Persistence of Sect. 3.1) we can prove that \((\forall \ \varepsilon > 0)\ \Diamond \Box (|x - y| \le \varepsilon )\).

Fig. 3.

Sample paths of the Markov jump system described in Fig. 2.

Organization. Section 2 presents an infinite state discrete time stochastic system model and formally states the problem of proving persistence and recurrence (Sect. 2.2). Section 3 presents our main contribution in the form of proof rules for persistence and recurrence properties with the soundness results of our analysis presented in Sect. 4. Section 5 presents the results of our prototype implementation on a set of benchmarks, followed by a summary of relevant related work (Sect. 6) and conclusion (Sect. 7).

2 Preliminaries

In this section, we present the basic computational model of discrete-time stochastic dynamical systems that we study here and introduce the notion of tail invariant properties in the form of persistence and recurrence. We formulate a problem statement for the rest of the paper and finally introduce supermartingales. The proofs of all statements can be found in the extended version of the paper.

2.1 Discrete Time Stochastic Systems

We present a simple yet general model of an infinite-state discrete time stochastic system, and then examine a model of polynomial stochastic transition systems. A purely deterministic system with state-space X is described by a rule of the form \(\mathbf x ' := F(\mathbf x )\) and \(\mathbf x _0 \in X_0\), where \(\mathbf x \in X\) and \({\mathbf {x}}_0\) is the initial state belonging to initial set \(X_0\). Stochastic systems studied here are described by \(\mathbf x ' := F(\mathbf x ,\mathbf r )\) and \(\mathbf x _0 \sim \mathcal {D}_0\), where \({\mathbf {r}}\) is a vector of random variables and initial state \({\mathbf {x}}_0\) is drawn from initial distribution \(\mathcal {D}_0\).

Definition 1

(DTSS). A discrete-time stochastic system (DTSS) \(\varPi \) is defined as the tuple \(\left\langle \varSigma , \mathcal {R}, \mathcal {F}, \mathcal {D}_0 \right\rangle \) with the following components:

  1. 1.

    a state space \(\varSigma \) and an associated Borel \(\sigma \)-algebra on it,

  2. 2.

    a probability space \(\mathcal {R}: \left\langle R,\mathcal {F}_R, P \right\rangle \) (with individual samples denoted by \({\mathbf {r}}\)),

  3. 3.

    a transition function \(\mathcal {F}: \varSigma \times R \rightarrow \varSigma \), wherein \(\mathcal {F}({\mathbf {x}},{\mathbf {r}})\) denotes the next state obtained from a state \({\mathbf {x}} \in \varSigma \) and random sample \({\mathbf {r}} \in R\),

  4. 4.

    an initial probability distribution \(\mathcal {D}_0\) over \(\varSigma \).

Let \(\Omega \) denote the sample set \(\varSigma \times R^{\omega }\), which consists of tuples \(\left\langle \mathbf x _0, \mathbf r _0, \mathbf r _1, \cdots , \mathbf r _n, \cdots \right\rangle \). Here \(\mathbf x _0 \in \varSigma \) denotes the starting state sample, and \(\mathbf r _0, \mathbf r _1, \ldots , \mathbf r _n, \dots \) denote successive draws of random variables from the sample set R of the probability space \(\mathcal {R}\). Given a discrete-time Markov process, the model maps each \(\omega : \left\langle \mathbf x _0,\mathbf r _0, \ldots \right\rangle \in \Omega \) to a sample path (or trace) \(\pi (\omega ) : \left\langle \mathbf x _0, \mathbf x _1, \mathbf x _2, \ldots \right\rangle \), defined inductively by \(\mathbf x _{i+1} : \mathcal {F}(\mathbf x _i, \mathbf r _i)\) for all \(i \ge 0\).

The stochastic process is defined by applying the Kolmogorov extension theorem starting from finite-dimensional distributions \(P_{t_1,\ldots ,t_k}(B)\) where \(B \subseteq \varSigma ^k\) is a measurable set. The semantics of the stochastic system is, therefore, equivalent to an infinite-state discrete-time Markov chain (see the extended version of the paper for details).
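The semantics above can be sketched operationally: a sample path is generated by repeatedly drawing a fresh sample \(\mathbf r _i\) and applying \(\mathcal {F}\). The following minimal Python sketch unrolls Definition 1; the concrete system \(x' := x/2 + r\) with \(r \sim \mathcal {U}(-0.1, 0.1)\) is an illustrative choice, not taken from the paper.

```python
import random

def sample_path(x0, F, sample_r, steps):
    """Unroll a DTSS: x_{i+1} = F(x_i, r_i), with a fresh sample r_i per step."""
    path = [x0]
    for _ in range(steps):
        path.append(F(path[-1], sample_r()))
    return path

# Illustrative system (not from the paper): x' = x/2 + r, r ~ U(-0.1, 0.1).
rng = random.Random(0)
path = sample_path(8.0, lambda x, r: 0.5 * x + r,
                   lambda: rng.uniform(-0.1, 0.1), 100)
# The contraction by 1/2 keeps |x_n| <= 0.5^n * 8 + 0.2 eventually.
```

The map \(\omega \mapsto \pi (\omega )\) of the text corresponds to fixing the stream of samples produced by `sample_r`.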

Independence of Samples. The formulation above naturally assumes that the samples of the random variable \(\mathbf r _i\) are independent of the current state \(\mathbf x _i\) and of the previous samples \(\mathbf r _0, \ldots , \mathbf r _{i-1}\). While this is somewhat restrictive, in practice it encompasses nearly all example instances that we are aware of.

No Demonic Nondeterminism. The formulation also precludes any demonic nondeterminism since \(\mathcal {F}\) is a function.

In this work, we focus on polynomial (stochastic) systems, which are instances of stochastic systems with piecewise polynomial update maps.

Definition 2

(Polynomial Stochastic System). A polynomial stochastic system \(\varPi \) is a tuple \(\left\langle X, \mathcal {R}, \mathcal {T}, \mathcal {D}_0 \right\rangle \), where (a) the state-space \(X \subseteq \mathbb {R}^n\) is a semi-algebraic set (i.e., X is the solution set of finitely many polynomial inequalities), (b) \(\mathcal {R}\) is a probability space for the stochastic inputs written collectively as \({\mathbf {r}}: ({\mathbf {r}}_c,{\mathbf {r}}_b)\), wherein \({\mathbf {r}}_c\) denotes the (possibly multivariate) continuous random variable and \({\mathbf {r}}_b\) denotes the (discrete) random variables that take on finitely many values, (c) \(\mathcal {T}: \{ \tau _1, \ldots , \tau _m\} \) is a finite set of transitions, and (d) \(\mathcal {D}_0\) is an initial state probability distribution over X.

Each transition \(\tau \in \mathcal {T}\) has two parts: a guard predicate \(\varphi _{\tau }\) and an update function \(f_{\tau }: X \times R \rightarrow X\):

  1. 1.

    The guard \(\varphi _{\tau }({\mathbf {x}})\) is a conjunction of polynomial inequalities over \({\mathbf {x}}\);

  2. 2.

    The update function \(f_{\tau }({\mathbf {x}},{\mathbf {r}}): X\times R \rightarrow X\) is a piecewise polynomial function of the form:

    $$ f_{\tau }({\mathbf {x}},{\mathbf {r}}) : \left\{ \begin{array}{rl} g_{\tau ,1}({\mathbf {x}},{\mathbf {r}}_c), &{} \text { if }\ \psi _{\tau ,1}({\mathbf {r}}_b) \\ \vdots &{} \\ g_{\tau ,j}({\mathbf {x}},{\mathbf {r}}_c), &{} \text { if }\ \psi _{\tau ,j}({\mathbf {r}}_b)\, ,\\ \end{array} \right. $$

where \(g_{\tau ,1}, \ldots , g_{\tau ,j}\) are multivariate polynomials over \({\mathbf {x}},{\mathbf {r}}_c\) and \(\psi _{\tau ,1}({\mathbf {r}}_b), \ldots \), \(\psi _{\tau ,j}({\mathbf {r}}_b)\) are mutually exclusive and exhaustive predicates over the random variables, with probabilities \(p_{\tau ,i}:\ \mathrm {Prob}(\psi _{\tau ,i}({\mathbf {r}}_b))\).

We refer to each function \(g_{\tau ,i}\) as a fork of \(f_\tau \) guarded by \(\psi _{\tau ,i}\) with corresponding fork probability \(p_{\tau ,i}\), for all i, \(1 \le i \le j\).

For a polynomial system to represent a stochastic system over X according to Definition 1, we require that transitions together form a function over the state-space X:

  1. 1.

    The guards are pairwise mutually exclusive: \(\varphi _{\tau _{i}} \wedge \varphi _{\tau _{j}}\) is unsatisfiable for all \( i \not = j\).

  2. 2.

The guards are mutually exhaustive: \(\bigvee _{j=1}^m \varphi _{\tau _{j}} \equiv true \).

With these conditions, it is easy to define an overall piecewise polynomial transition function \(\mathcal {F}\) that casts any polynomial stochastic transition system as a stochastic system.
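The two well-formedness conditions can be checked on sampled states. The helper below (an illustrative sketch, with hypothetical names, not part of the paper's development) tests that exactly one guard fires at each sampled state; note that sampling can refute, but never prove, mutual exclusivity and exhaustiveness over a continuum.

```python
def well_formed(guards, samples):
    """On each sampled state, exactly one guard must fire: this combines
    pairwise mutual exclusivity with exhaustiveness (Definition 2)."""
    return all(sum(g(x) for g in guards) == 1 for x in samples)

# An illustrative partition of the state-space X = R into two guards.
guards = [lambda x: x < 0, lambda x: x >= 0]
states = [i / 10 for i in range(-50, 51)]
ok = well_formed(guards, states)   # a genuine partition of X
# Dropping x = 0 from both guards breaks exhaustiveness:
bad = well_formed([lambda x: x < 0, lambda x: x > 0], states)
```
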

Example 1

(Strange Random Walk). Let \(\{Y_i\}\) be a sequence of random variables over \(\mathbb {R}\) with \(Y_0\) distributed uniformly over [0, 1]. For all \(n \ge 0\), define:

$$Y_{n+1} = \left\{ \begin{array}{ll} Y_n^2, &{}\text { with probability }\frac{1}{2},\\ 2Y_n - Y_n^2, &{}\text { with probability }\frac{1}{2}. \end{array}\right. $$

The corresponding polynomial stochastic system is \(\varPi : \langle \mathbb {R}, \mathcal {R}, \{\tau \}, \mathcal {D}_0\rangle \), where \(\mathcal {R}\) is the probability space for the uniform distribution \(\mathcal {U}(0, 1)\), the initial probability distribution \(\mathcal {D}_0\) is \(\mathcal {U}(0,1)\), and the transition \(\tau : \langle true, f_{\tau }\rangle \) has update map

$$f_\tau (\mathbf x , \mathbf r ) : \left\{ \begin{array}{ll} g_{\tau ,1}(\mathbf x ):\ \mathbf x ^2 , &{}\text { if }\psi _{\tau ,1}(\mathbf r _b) : \mathbf r _b \le 1/2,\\ g_{\tau ,2}(\mathbf x ):\ 2\mathbf x - \mathbf x ^2, &{}\text { if }\psi _{\tau ,2}(\mathbf r _b) : \mathbf r _b > 1/2, \end{array}\right. $$

and defines corresponding fork probabilities \(p_1 = p_2 = 1/2\).
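As a quick numerical sanity check (an illustration, not part of the formal development), one can simulate the strange random walk and watch the quantity \(x(1-x)\), the certificate used later in Example 4, collapse towards 0, indicating that sample paths accumulate near the endpoints 0 and 1.

```python
import random

def step(y, rng):
    # One transition of the strange random walk: fork g1 (y^2) or
    # fork g2 (2y - y^2), each taken with probability 1/2.
    return y * y if rng.random() <= 0.5 else 2 * y - y * y

rng = random.Random(42)
finals = []
for _ in range(1000):
    y = rng.random()              # Y_0 ~ U(0, 1)
    for _ in range(200):
        y = step(y, rng)
    finals.append(y)

# Both forks map [0, 1] into itself, so every path stays in the unit
# interval; empirically the mean of y(1 - y) is close to 0 after 200 steps.
mean_v = sum(y * (1 - y) for y in finals) / len(finals)
```
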

Pre-Expectations. Key to the analysis is the notion of pre-expectation. The definitions below are inspired by [8, 19, 23] and are related to drift operators of Markov processes [24]. We first formalize the notion of pre-expectations over general stochastic systems and then provide a specialized definition for polynomial stochastic systems.

Consider a stochastic system \(\left\langle \varSigma ,\mathcal {R}, \mathcal {F}, \mathcal {D}_0 \right\rangle \), and a function \(h: \varSigma \rightarrow \mathbb {R}\) over the state-space. The pre-expectation of h w.r.t. \(\mathcal {F}\) yields another function \(\hat{h}: \varSigma \rightarrow \mathbb {R}\) such that for any state \(\mathbf x \in \varSigma \), \(\hat{h}(\mathbf x )\) yields the expected value of \(h(\mathbf x ')\), where the expectation is taken over all states \(\mathbf x '\) reached in one step from \(\mathbf x \). Formally, \(\hat{h}(\mathbf x ):\ \mathbb {E}_R\left( h (\mathcal {F}(\mathbf x ,\mathbf r ) ) \right) \). The pre-expectation can be difficult to compute for a stochastic system, even if \(h(\mathbf x )\) is of a simple form, for example, polynomial.
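When no closed form is available, the pre-expectation can at least be estimated by sampling. The following Monte Carlo sketch is illustrative only: the system \(x' := x + r\) with \(r \sim \mathcal {U}(-1, 1)\) and \(h(x) = x^2\) is not from the paper, and its exact pre-expectation \(x^2 + 1/3\) serves to check the estimator.

```python
import random

def pre_expectation(h, F, sample_r, x, n=200_000):
    """Monte Carlo estimate of E_R[h(F(x, r))] at a single state x."""
    return sum(h(F(x, sample_r())) for _ in range(n)) / n

# Illustrative system: x' = x + r with r ~ U(-1, 1); for h(x) = x^2 the
# exact pre-expectation is x^2 + E[r^2] = x^2 + 1/3.
rng = random.Random(1)
est = pre_expectation(lambda x: x * x, lambda x, r: x + r,
                      lambda: rng.uniform(-1.0, 1.0), 1.0)
```
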

Now, we translate this definition to polynomial stochastic transition systems. We first define pre-expectations across transitions.

Definition 3

(Pre-Expectation across a Transition). Given a polynomial stochastic transition system \(\left\langle X ,\mathcal {R},\mathcal {T}, \mathcal {D}_0 \right\rangle \), a function \(h: X \rightarrow \mathbb {R}\) and a transition \(\tau \in \mathcal {T}\) with forks \(g_{\tau ,1}, \ldots , g_{\tau ,j}\) and corresponding fork probabilities \(p_{\tau ,1}, \ldots , p_{\tau ,j}\), the pre-expectation of h across transition \(\tau \) is a function \(\mathtt {pre}\mathbb {E}(h,\tau ): X\rightarrow \mathbb {R}\) defined as follows: \(\forall \mathbf x \in X\), \(\mathtt {pre}\mathbb {E}(h,\tau )(\mathbf x ) : \mathbb {E}_R[ h\left( f_{\tau }(\mathbf x ,\mathbf r ) \right) | \mathbf x ] =\ \sum _{i=1}^j p_{\tau ,i} \mathbb {E}_{R_c}[h \left( g_{\tau ,i} (\mathbf x ,\mathbf r _c) \right) ] \), with the expectation taken over R from which the random choices \(\mathbf r = (\mathbf r _b,\mathbf r _c)\) are drawn.

We now define the pre-expectation transformation over an entire stochastic system.

Definition 4

The pre-expectation of a function \(h:X \rightarrow \mathbb {R}\) w.r.t. a polynomial stochastic system \(\varPi :\left\langle X ,\mathcal {R}, \{\tau _1, \ldots , \tau _n\}, \mathcal {D}_0 \right\rangle \) is a function \(\mathtt {pre}\mathbb {E}(h,\varPi ):X\rightarrow \mathbb {R}\) defined by

$$\mathtt {pre}\mathbb {E}(h,\varPi )(\mathbf x ) :\ \sum _{i=1}^{n} \mathbb {1}_{\varphi _i}(\mathbf x )\; \mathtt {pre}\mathbb {E}(h,\tau _i)(\mathbf x ),$$

where \(\varphi _i\) is the guard of transition \(\tau _i\) and \(\mathbb {1}_{\varphi }\) is the indicator function of predicate \(\varphi \).

Related to the pre-expectation is the notion of a drift operator.

Definition 5

(Drift Operator). Let \(\varPi \) be a stochastic transition system and h be a function over the state-space. The drift of h w.r.t. \(\varPi \) is the function \(\mathcal {D}_{\varPi } h :\ \mathtt {pre}\mathbb {E}(h,\varPi ) - h\).

Wherever the system \(\varPi \) is clear from the context, we use \(\mathcal {D}h\) to denote the drift operator \(\mathcal {D}_{\varPi }\) applied to the function h.

Example 2

We return to Example 1 and compute the pre-expectation of \({h(x) = x}\):

$$\mathtt {pre}\mathbb {E}(h,\varPi )(x) = \tfrac{1}{2}\, h(x^2) + \tfrac{1}{2}\, h(2x - x^2) = \tfrac{1}{2} x^2 + \tfrac{1}{2}\left( 2x - x^2\right) = x.$$

It is clear that for any state \(x \in X\), the value of h equals the value of \(\mathtt {pre}\mathbb {E}(h,\varPi )\), or equivalently, the drift \(\mathcal {D}h = 0\). This function is an example of a martingale expression and such functions are central to our analysis. We give their definition in Sect. 2.3 and present properties of martingale expressions relevant to our analysis.
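The drift computation above can be replayed mechanically from the forks and fork probabilities of Example 1; the sketch below checks \(\mathcal {D}h = 0\) at a few sample states (floating-point arithmetic, hence a tolerance).

```python
import math

# Forks and fork probabilities of the strange random walk (Example 1).
forks = [(0.5, lambda x: x * x), (0.5, lambda x: 2 * x - x * x)]

def pre_E(h, x):
    # Pre-expectation across the single transition: sum_i p_i * h(g_i(x)).
    return sum(p * h(g(x)) for p, g in forks)

def drift(h, x):
    # Drift operator of Definition 5: D h = preE(h) - h.
    return pre_E(h, x) - h(x)

# h(x) = x is a martingale expression: its drift vanishes everywhere.
for x in [0.0, 0.1, 0.3, 0.5, 0.77, 1.0]:
    assert math.isclose(drift(lambda t: t, x), 0.0, abs_tol=1e-12)
```
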

Assume that for any polynomial \(p(\mathbf r _c)\) involving the continuous random variables \(\mathbf r _c\) the expectation \(\mathbb {E}(p(\mathbf r _c))\) is finite. Then the drift of a polynomial is a polynomial.

Lemma 1

Assume that all cross moments exist for the random variable \({\mathbf {r}}_c\). For a polynomial \(h({\mathbf {x}})\), the pre-expectation \(\mathtt {pre}\mathbb {E}(h,\tau )\) across a transition \(\tau \) is also a polynomial. Moreover, the pre-expectation \(\mathtt {pre}\mathbb {E}(h,\varPi )\) is a piecewise polynomial function of the form \(\mathtt {pre}\mathbb {E}(h,\varPi )({\mathbf {x}}) = q_j({\mathbf {x}})\) whenever \(\varphi _j({\mathbf {x}})\) holds, where \(\varphi _j\) is a transition guard and \(q_j({\mathbf {x}})\) is a polynomial.

2.2 Persistence and Recurrence

Let \(\varPi \) be a polynomial stochastic system with a sample set \(\Omega : X \times R^{\omega }\) and an associated \(\sigma \)-algebra generated by the Borel sets over X and R. Let \(\Pr \) be the associated measure that maps a measurable subset \(S \subseteq \Omega \) to its probability \(\Pr (S)\). Let \(\pi \) be a function that maps each sample \(\omega \in \Omega \) to the corresponding sample path of the system \(\pi (\omega ): \left\langle \mathbf x _0 , \mathbf x _1, \ldots , \mathbf x _m, \ldots \right\rangle \). Likewise, let \(\pi _m\) map each sample \(\omega \in \Omega \) to the state encountered at time m, i.e., \( \pi _m(\omega ):\mathbf x _m\).

For a predicate \(\varphi \) over the system states, the persistence property \(\Diamond \Box \varphi \) is a collection of sample paths: \(\llbracket \Diamond \Box \varphi \rrbracket : \left\{ \omega \in \Omega \ |\ \exists n \ge 0, \forall m \ge n, \pi _m(\omega ) \models \varphi \right\} \). It is easy to show that this is a measurable set. The probability of the persistence property \({\Diamond \Box \varphi }\) is denoted \(\Pr (\Diamond \Box \varphi )\). We say the persistence property \({\Diamond \Box \varphi }\) holds almost surely (a.s.) iff \(\Pr (\Diamond \Box \varphi ) = 1\). Such a property is also known as a qualitative (probability 1) persistence property and can be stated in PCTL as \(\mathbb {P}_{=1}(\Diamond \Box \, \varphi )\).

Similarly, a recurrence property \(\Box \Diamond \varphi \) is a collection of sample paths: \(\llbracket \Box \Diamond \varphi \rrbracket \) : \(\left\{ \omega \in \Omega \ |\ \forall n \ge 0,\ \exists \ m \ge n, \pi _m(\omega ) \models \varphi \right\} \). We say that the recurrence property \(\Box \Diamond \varphi \) holds almost surely iff \(\Pr ( \Box \Diamond \varphi ) = 1\).
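On finite simulation prefixes, tail properties such as \(\Diamond \Box \varphi \) and \(\Box \Diamond \varphi \) can of course only be approximated. The helpers below (illustrative sketches with hypothetical names, useful only as sanity checks on simulations) compute the natural finite-prefix proxies.

```python
def eventually_always(prefix, pred):
    """Finite-prefix proxy for <>[](pred): does some suffix of this finite
    prefix satisfy pred at every state?  (Tail properties cannot be decided
    from finite prefixes; this only sanity-checks a simulation.)"""
    suffix_ok = False
    for state in reversed(prefix):
        if not pred(state):
            return suffix_ok
        suffix_ok = True
    return suffix_ok

def visit_count(prefix, pred):
    """For [] <>(pred), a finite prefix can only count visits to pred."""
    return sum(1 for s in prefix if pred(s))

trace = [5, 3, 1, 0.4, 0.2, 0.05, 0.01]
assert eventually_always(trace, lambda s: s <= 0.5)      # holds from index 3
assert not eventually_always(trace, lambda s: s >= 1)
assert visit_count(trace, lambda s: s <= 0.5) == 4
```
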

Problem Statement. Let \(\varPi \) be a polynomial stochastic system with state-space X and let \(T \subseteq X\) be a measurable set of states. In this paper we are interested in two related problems: (i) Establish that the persistence property \(\Diamond \Box (T)\) holds a.s.; and/or, (ii) Establish that the recurrence property \(\Box \Diamond (T)\) holds a.s.

2.3 Supermartingales and Their Properties

Following [32] we recall the notion of supermartingales and some key properties.

Definition 6

A discrete time real-valued stochastic process \(\mathcal {M} = \{M_i\}_{i=0}^{\infty }\) is a supermartingale if \(\mathbb {E}(M_{n+1} |\ M_n = m_n, \ldots , M_{0} = m_0) \le m_n\), for all \(n \ge 0\) and \(m_n\).

Since our work mostly concerns Markov processes, we will write \(\mathbb {E}(M_{n+1} | M_n = m_n)\) to mean \(\mathbb {E}(M_{n+1} | M_n = m_n, \ldots , M_{0} = m_0)\). We now propose the key definitions of additive and multiplicative supermartingales that will be used in our work.

Definition 7

(Additive and Multiplicative Supermartingales). A supermartingale \(\mathcal {M}=\{M_i\}_{i=0}^{\infty }\) is called (\(\varepsilon \)-)additive iff \(\mathbb {E}(M_{n+1}|M_n = m_n) \le m_n - \varepsilon \), for all \(n \ge 0\) and \(m_n\), for some \(\varepsilon > 0\). The supermartingale \(\mathcal {M}\) is called (\(\alpha \)-)multiplicative iff \(\mathbb {E}(M_{n+1}|M_n = m_n) \le \alpha m_n\), for all \(n \ge 0\), \(m_n \ge 0\), and for some \(0< \alpha < 1\).

First we state the following simple result about \(\alpha \)-multiplicative supermartingales.

Lemma 2

Let \(\mathcal {M} = \{M_i\}_{i=0}^{\infty }\) be a nonnegative \(\alpha \)-multiplicative supermartingale for some \(\alpha \in (0,1)\). Then \(\widehat{\mathcal {M}} = \{\widehat{M}_i : \frac{M_i}{\alpha ^i}\}_{i=0}^{\infty }\) is a nonnegative supermartingale.
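A one-line computation shows why Lemma 2 holds: using the convention above and the \(\alpha \)-multiplicative bound of Definition 7,

$$\mathbb {E}\big (\widehat{M}_{n+1} \,\big |\, \widehat{M}_n = \widehat{m}_n\big ) \;=\; \frac{\mathbb {E}\big (M_{n+1} \,|\, M_n = \alpha ^n \widehat{m}_n\big )}{\alpha ^{n+1}} \;\le \; \frac{\alpha \cdot \alpha ^{n} \widehat{m}_n}{\alpha ^{n+1}} \;=\; \widehat{m}_n,$$

and nonnegativity of \(\widehat{\mathcal {M}}\) is inherited from \(\mathcal {M}\) since \(\alpha ^i > 0\).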

Following [8], we relate supermartingales to polynomial stochastic systems.

Definition 8

(Supermartingale Expressions). Let e be an expression over the state variables of a polynomial transition system \(\varPi \) (i.e., e is a real-valued function over the state-space X of \(\varPi \)). The expression e is a supermartingale expression for \(\varPi \) iff

$$ (\forall {\mathbf {x}} \in X)\ \mathtt {pre}\mathbb {E}(e, \varPi ) \le e({\mathbf {x}}),\text { or equivalently, } (\forall {\mathbf {x}} \in X)\ \mathcal {D}e({\mathbf {x}}) \le 0. $$

The expression e is called (c-)additive or alternatively, a supermartingale ranking function (SMRF) iff there exists \(c>0\) such that

$$ (\forall {\mathbf {x}} \in X)\ \mathtt {pre}\mathbb {E}(e, \varPi ) \le e({\mathbf {x}}) - c, \text { or equivalently, } (\forall {\mathbf {x}} \in X)\ \mathcal {D}e({\mathbf {x}}) \le -c. $$

Similarly, e is an (\(\alpha \)-)multiplicative supermartingale expression iff

$$(\exists \alpha \in (0,1)) (\forall {\mathbf {x}} \in X)\ \mathtt {pre}\mathbb {E}(e, \varPi ) \le \alpha e({\mathbf {x}}) .$$

By definition, any (\(\alpha \)-)multiplicative supermartingale expression of \(\varPi \) induces an (\(\alpha \)-)multiplicative supermartingale when evaluated along the sample paths of \(\varPi \).

3 Proof Rules for Persistence and Recurrence

In this section, we describe the main proof rules for persistence and recurrence properties. All proof rules involve finding a suitable “certificate” in the form of a stochastic analogue of a Lyapunov-like function over the state-space X. The soundness of our approach (presented in Sect. 4) relies on certificate functions behaving as supermartingale expressions over the state variables of the stochastic system.

Let X be the state-space of interest and \(T \subseteq X\) be a target set.

3.1 Proof Rules for Persistence

We provide a series of proof rules for proving persistence properties. The relation between these rules is examined in the extended version of the paper.

(Proof rules persist-geom and persist-add, with conditions (p1)–(p5).)

Both persist-geom and persist-add state that a polynomial stochastic system \(\varPi \) satisfies \(\Diamond \Box (T)\) almost surely if there exists a nonnegative certificate function V (condition (p1)) whose value outside T is lower bounded by some \(\varepsilon > 0\) (condition (p2)). Moreover, the drift conditions ensure that, in expectation, V does not increase in the next step inside T (condition (p3)), and decreases by some fixed non-zero quantity outside T (an additive constant in (p5), or a multiplicative factor in (p4)). Intuitively, these conditions together guarantee that V is a supermartingale whose drift outside T forces its value to decrease along almost all sample paths and eventually drop below \(\varepsilon \), at which point the sample path is “forced” to enter T and persist there forever.

Applications. We present an application of each rule and defer soundness to Sect. 4.

Example 3

Consider a stochastic system \(\varPi \) with a single variable x over \(\mathbb {R}\), and a single transition: \(x' := 0.1 (1+ w)x\), where w is a standard Gaussian random variable. We show that the almost sure persistence property \(\Diamond \Box ( T : |x| \le 0.1)\) holds.

Consider the function \(m(x): x^2\), which is nonnegative on X and \(m(x) \ge 0.01\) for all \(x \in X\! \setminus \! T\) (conditions (p1), (p2)). Moreover, for all \(x \in X\), \(\mathtt {pre}\mathbb {E}(m,\varPi ) = \mathbb {E}_w(m(x_{n+1}) | x_n) = 0.02 x_n^2,\text { so }\mathcal {D}m(x) \le -0.98m(x)\). Hence, m(x) defines a 0.02-multiplicative supermartingale expression (conditions (p3), (p4)).

Applying persist-geom, we conclude that \(\Diamond \Box ( -0.1 \le x \le 0.1)\) holds a.s.
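The pre-expectation computation in Example 3 can be double-checked by direct sampling; the snippet below is a numerical sanity check only, and the state x = 3 is an arbitrary choice.

```python
import random

rng = random.Random(7)
x = 3.0                       # arbitrary current state
n = 200_000
# Estimate E_w[m(x')] for x' = 0.1 (1 + w) x, w ~ N(0, 1), m(x) = x^2.
est = sum((0.1 * (1 + rng.gauss(0, 1)) * x) ** 2 for _ in range(n)) / n
exact = 0.02 * x * x          # since E[(1 + w)^2] = 1 + 2 E[w] + E[w^2] = 2
```
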

Example 4

For the polynomial stochastic system of Example 1 over the state-space \(X = [0, 1]\), we establish the almost sure persistence property \(\Diamond \Box (x \le 0.05 \vee x \ge 0.95)\).

Consider the certificate function \(V(x) = x(1-x)\). For all \(x \in X\), \(V(x) \ge 0\), and for all \(x \in X \setminus T = (0.05, 0.95)\), \(V(x) \ge 0.0475\) (conditions (p1), (p2)). Next, note that \(\mathtt {pre}\mathbb {E}(V,\varPi ) = x(1-x)(1-x+x^2)\), and \(\mathcal {D}V(x) = x(1-x)(x^2-x)\). It is easy to check that for all \(x \in (0.05, 0.95)\), \(\mathcal {D}V(x) \le -0.00225625\), and for all \(x \in [0, 0.05] \cup [0.95, 1]\), \(\mathcal {D}V(x) \le 0\) (conditions (p3), (p5)).

Applying persist-add, we conclude that \(\Diamond \Box (x \le 0.05 \vee x \ge 0.95)\) holds a.s.
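The drift bounds claimed in Example 4 are easy to confirm numerically. The check below evaluates \(\mathcal {D}V\) on a fine grid; this is a sanity check rather than a proof, since a grid cannot certify an inequality over a continuum.

```python
# Drift of V(x) = x(1 - x) under the forks of Example 1:
# preE(V)(x) = 0.5 V(x^2) + 0.5 V(2x - x^2), and DV(x) = -x^2 (1 - x)^2.
def drift_V(x):
    pre = 0.5 * (x * x) * (1 - x * x) + 0.5 * (2 * x - x * x) * (1 - x) ** 2
    return pre - x * (1 - x)

grid = [i / 10000 for i in range(10001)]
# (p3): DV <= 0 on all of X = [0, 1] (up to floating-point error).
assert all(drift_V(x) <= 1e-12 for x in grid)
# (p5): DV <= -0.00225625 on X \ T, worst case at the endpoints 0.05, 0.95.
assert max(drift_V(x) for x in grid if 0.05 <= x <= 0.95) <= -0.00225625 + 1e-9
```
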

Note 1

In both examples, the certificates m(x) and V(x) can be used in both persist-geom and persist-add: \(\mathcal {D}m(x) \le -0.98x^2 \le -0.0098\), for all \(x \in X\setminus T\), and \(\mathcal {D}V(x) \le x(1-x)(x^2-x) \le -0.0475V(x)\). We expand on this point next.

Strong Persistence. Rules persist-geom and persist-add present sufficient conditions under which certificates V prove that \(\Diamond \Box (T)\) holds almost surely. Unfortunately, the difference in constraints inside and outside T may force V to be a high degree polynomial (or a piecewise polynomial function). To simplify the constraints and make the search for certificates of almost sure persistence properties tractable, we propose stronger versions of the persistence proof rules, establishing properties of the form: \((\forall \ \varepsilon > 0)\ \Diamond \Box (V (\mathbf x ) \le \varepsilon )\).

(Proof rule spersist-geom: strong persistence, geometric.)

Similarly, we provide an additive version of the strong persistence rule. We say a function \(V(\mathbf x )\) has bounded increase over \(\varPi \) iff there is a constant \(C > 0\) such that, for every possible next state \(\mathbf x '\) reached from \(\mathbf x \) (i.e., \(\mathbf x ' = \mathcal {F}(\mathbf x , \mathbf r )\) for some random sample \(\mathbf r \)), \(V(\mathbf x ') - V(\mathbf x ) \le C\).

(Proof rule spersist-add: strong persistence, additive, under bounded increase.)

spersist-geom presents a stronger, yet simpler to state and encode, version of the drift requirement dictated by persist-geom (viz. inside the region T). Similarly, spersist-add does not insist on V being positive definite but only on V decreasing in expectation by c everywhere. The benefit of the stronger formulations of the persistence rule is that each level set of the Lyapunov-like certificate V acts as a tail invariant: a set S that almost all traces of the stochastic system eventually reach and remain confined to.

Relations Between Proof Rules. In Note 1 we allude to the fact that certificates for rule persist-geom can equivalently be used for rule persist-add to prove persistence properties of polynomial stochastic systems. We state the main result of importance here and defer all proofs and the relationships between proof rules to the extended version.

Theorem 1

Let \(m({\mathbf {x}})\) be an \(\varepsilon \)-additive supermartingale expression that has bounded increase in \(\varPi \). Then there exist positive constants \(\lambda > 1\) and \(\alpha < 1\) such that \(\lambda ^{m({\mathbf {x}})}\) is an \(\alpha \)-multiplicative supermartingale expression. Moreover, let \(\kappa \in \mathbb {R}\) be such that \(\{ {\mathbf {x}} \in X \ |\ m({\mathbf {x}}) \le \kappa \}\) is nonempty. Then the system \(\varPi \) satisfies the tail invariance property \(m({\mathbf {x}}) \le \kappa \) almost surely.
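A toy instance may help see the transformation of Theorem 1 at work (an illustration of the statement, not its proof): take a walk whose increment of m is \(-1\) with probability 2/3 and \(+0.5\) with probability 1/3, so m is 0.5-additive with bounded increase \(C = 0.5\); already for \(\lambda = 1.2\) the one-step expected factor of \(\lambda ^{m}\) is below 1.

```python
# One-step expected factor of lambda^m when the increment of m is
# -1 w.p. 2/3 and +0.5 w.p. 1/3 (epsilon = 0.5, bounded increase C = 0.5).
lam = 1.2
eps = -((2 / 3) * (-1.0) + (1 / 3) * 0.5)             # additive drift = 0.5
alpha = (2 / 3) * lam ** (-1.0) + (1 / 3) * lam ** 0.5
# alpha ~ 0.92 < 1, so lambda^m is an alpha-multiplicative supermartingale
# expression for this walk, as Theorem 1 predicts.
```
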

Under some technical conditions, it is possible to prove the converse of Theorem 1. This shows that any positive \(\alpha \)-multiplicative supermartingale expression used to prove a tail invariant property has an equivalent additive supermartingale (more precisely, SMRF) formulation and vice versa.
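The construction behind Theorem 1 can be sanity-checked numerically. The following sketch (our own toy chain, not from the paper) uses \(x_{n+1} := x_n - 0.2 + w_n\) with \(w_n \sim \mathcal{U}(-0.5, 0.5)\), so that \(m(x) : x\) is a 0.2-additive supermartingale with bounded increase; the base \(\lambda = 1.5\) is an arbitrary choice:

```python
import random

# Toy chain: x' = x - 0.2 + w,  w ~ Uniform(-0.5, 0.5).
# m(x) = x is a 0.2-additive supermartingale with bounded increase (|x' - x| <= 0.7).
random.seed(0)

lam = 1.5  # any lambda > 1 close enough to 1 works; 1.5 happens to suffice here
N = 200_000

# The ratio E[lam**x' | x] / lam**x = E[lam**(x' - x)] is independent of x,
# so estimate it once by Monte Carlo.
alpha = sum(lam ** (-0.2 + random.uniform(-0.5, 0.5)) for _ in range(N)) / N
print(f"estimated alpha = {alpha:.3f}")  # closed form gives ~0.928

# lam**m is then an alpha-multiplicative supermartingale with alpha < 1.
assert alpha < 1.0
```

Bounded increase matters here: with unbounded (e.g., standard Gaussian) increments and the same \(\lambda = 1.5\), the convexity penalty of exponentiation outweighs the \(-0.2\) drift and the estimated ratio exceeds 1.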

Incompleteness. We demonstrate that although sound, our approach is incomplete. The existence of a nonnegative \(\alpha \)-multiplicative supermartingale expression or a SMRF of bounded increase is sufficient but not a necessary condition for the system to almost surely satisfy a tail invariant property. Example 8 in Sect. 4.2 demonstrates this result.

3.2 Proof Rule for Recurrence

We now focus on proof rules for proving the almost sure recurrence property: \(\Box \Diamond (T)\), i.e., T is visited infinitely often by almost all sample paths. The proof rule is almost identical to a related rule that establishes “positive recurrence” in Markov chains [24].

figure d

4 Soundness of Proof Rules

4.1 Supermartingales as Certificates of Geometric Persistence Rules

We state two theorems that formally relate supermartingales and persistence properties. The first establishes the convergence of nonnegative \(\alpha \)-multiplicative supermartingales.

Theorem 2

Let \(\mathcal {M} = \{ M_i \}_{i=0}^{\infty }\) be a nonnegative \(\alpha \)-multiplicative supermartingale for some \(\alpha \in (0, 1)\). Then \(\mathcal {M}\) converges almost surely (samplewise) to 0.

This result can be applied directly to stochastic transition systems. Let \(m(\mathbf x )\) be a nonnegative \(\alpha \)-multiplicative supermartingale expression for a transition system \(\varPi \) for some \(\alpha \in (0,1)\). For a sample path \(\{ \mathbf x _i \}_{i=0}^{\infty }\) of \(\varPi \), we say that \(m(\mathbf x _i)\) upcrosses a level \(\kappa > 0\) iff \(m(\mathbf x _i) \le \kappa \) and \(m(\mathbf x _{i+1}) > \kappa \). If \(m(\mathbf x )\) converges almost surely to 0 on all sample paths, then the number of upcrossings of \(m(\mathbf x )\) on any sample path is a.s. finite.

Lemma 3

Let \(m({\mathbf {x}})\) be a nonnegative \(\alpha \)-multiplicative supermartingale expression for a polynomial stochastic system \(\varPi \). Then for all \( \kappa > 0\), the number of \(\kappa \)-upcrossings of \(m({\mathbf {x}})\) is almost surely finite, i.e.,

$$ \mathrm {Pr}\bigl (\big \{\omega \in \Omega \ \big |\ \{i\,|\, m(\pi _i(\omega ))\le \kappa \wedge m(\pi _{i+1}(\omega )) >\kappa \} \text { is finite} \big \}\bigr ) = 1. $$

Proof

The result follows directly from the almost sure convergence of \(m(\mathbf x )\) to zero on sample paths of \(\varPi \). \(\square \)

This means that for any threshold \(\kappa >0\), \(m(\mathbf x ) \le \kappa \) is a tail invariant property: i.e., \(\Diamond \Box (\varphi : m(\mathbf x ) \le \kappa )\) holds almost surely.

Theorem 3

(Soundness of persist-geom ). A polynomial stochastic system \(\varPi \) satisfies the almost sure persistence property \(\Diamond \Box (T)\) if there exists a function V that satisfies conditions (p1)-(p4) of persist-geom.

Necessity. The two conditions on the multiplicative supermartingale \(m(\mathbf x )\) are \(\alpha \in (0,1)\) and \(m(\mathbf x )\) nonnegative. We show their necessity through the following example.

Example 5

Consider a stochastic transition system with a single variable x defined over the state space \( [0,\infty )\) with two transitions \(\tau _1\) and \(\tau _2\). Transition \(\tau _1\) has a guard \(x \ge 1\) and does not alter the value of x. Transition \(\tau _2\) has a guard \(x < 1\) and chooses between \(x' := 2x\) or \(x' := \frac{x}{2}\) with equal probabilities. That is:

$$ x' := {\left\{ \begin{array}{ll} x &{} \text{ if } x \ge 1\text{, } \text{ and } \\ 2x &{} \text{ if } x< 1, \text{ with } \text{ prob. } \frac{1}{2} \\ \frac{x}{2} &{} \text{ if } x < 1, \text{ with } \text{ prob. } \frac{1}{2}\\ \end{array}\right. } $$

The initial value of x is exponentially distributed over \([0,\infty )\). Note that m(x) : x is a nonnegative \(\alpha \)-multiplicative supermartingale only when \(\alpha = 1\). Clearly, m does not converge almost surely to zero.
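This behavior is easy to observe empirically. The following Monte Carlo sketch (our own illustration; the horizon and path count are arbitrary choices) simulates the system and shows that almost every path is eventually absorbed in the region \(x \ge 1\), so m cannot converge to 0:

```python
import random

random.seed(1)

def run(x0, steps=20_000):
    x = x0
    for _ in range(steps):
        if x >= 1:                     # tau_1 keeps x unchanged: absorbing
            return x
        x = 2 * x if random.random() < 0.5 else x / 2   # tau_2
    return x

# Initial value ~ Exp(1); for x0 < 1, log2(x) performs a symmetric random
# walk until it first crosses 0, so almost every path is trapped at x >= 1.
finals = [run(random.expovariate(1.0)) for _ in range(500)]
frac_absorbed = sum(f >= 1 for f in finals) / len(finals)
print(f"fraction of paths ending with x >= 1: {frac_absorbed:.2f}")
assert frac_absorbed > 0.8   # m(x) = x does not converge to 0
```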

Consider another transition system involving \(x \in \mathbb {R}\) having two forks: \(x' := x\) with probability \(\frac{2}{3}\), and \(x' := -x\) with probability \(\frac{1}{3}\). Clearly, m(x) : x is a \(\frac{1}{3}\)-multiplicative supermartingale. However, m is not nonnegative over the state-space \(\mathbb {R}\), so m does not prove any persistence property. Indeed, \(\Diamond \Box (x\! \le \! 1)\) does not hold almost surely for the system.

4.2 Supermartingales as Certificates of Additive Persistence Rules

Recall that an additive supermartingale expression \(m(\mathbf x )\) of \(\varPi \) satisfies the condition

$$\begin{aligned} (\forall \ \mathbf x \in X)\ \mathtt {pre}\mathbb {E}(m(\mathbf x ), \varPi ) \le m(\mathbf x ) - \varepsilon , \end{aligned}$$

for some constant \(\varepsilon > 0\). (See Definition 7.) Given an additive supermartingale expression m, let \(M_\kappa : \{ \mathbf x \in X |\ m(\mathbf x ) \le \kappa \}\). For any \(\kappa \) where \(M_\kappa \ne \emptyset \), we can prove \(\Diamond ( M_\kappa )\) holds a.s. [8, 10, 15]. Yet in general, the property \(\Diamond \Box (M_\kappa )\) does not hold a.s.

Example 6

( MoonWalk ) A MoonWalk system consists of a random walk over the state-space \(X: \mathbb {Z}_{\le 0}\) of the nonpositive integers:

$$ x_{n+1} :=\left\{ \begin{array}{ll}\small x_n - 1\ \ &{} \text { with prob. }\ p(x_n), \\ 0 &{} \text { with prob. }\ 1 - p(x_n), \end{array} \right. $$

wherein \(p(x): \frac{ x - 0.5}{ x - 1} = 1 - \frac{0.5}{1-x}\), for \(x < 0\), and \(p (0) = 1\). In other words, the random walk either chooses to decrease x by 1 with probability p(x) or jumps to 0 with probability \(1-p(x)\). The initial state follows a negative Poisson distribution.

The function m(x) : x is an additive supermartingale expression for the MoonWalk system: for \(x_n < 0\), \(\mathbb {E}(m(x_{n+1}) \mid x_n) = x_n - 0.5\), and for \(x_n = 0\), \(\mathbb {E}(m(x_{n+1}) \mid x_n) = x_n - 1 = -1\). Yet the sublevel sets of the function m cannot be used to establish persistence properties, because of the following result.
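The one-step drift can be checked numerically. The sketch below (ours; the sample sizes are arbitrary) verifies \(\mathbb{E}(x_{n+1} \mid x_n = x) = x - 0.5\) for several negative states:

```python
import random

random.seed(2)

def p(x):                      # probability of stepping down to x - 1
    return 1.0 if x == 0 else (x - 0.5) / (x - 1.0)

def step(x):
    return x - 1 if random.random() < p(x) else 0

for x in [-1, -3, -10]:
    drift = p(x) * (x - 1)     # exact E[x' | x]: the jump-to-0 branch contributes 0
    assert abs(drift - (x - 0.5)) < 1e-9
    est = sum(step(x) for _ in range(100_000)) / 100_000
    print(f"x = {x:3d}: exact E[x'|x] = {drift:.1f}, Monte Carlo = {est:.3f}")
    assert abs(est - drift) < 0.05
```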

Lemma 4

For any \(\eta <0\), the probability that a sample path of MoonWalk satisfies \(\Diamond \Box ( x \le \eta )\) is 0.

Using an additive supermartingale expression m to prove tail invariance properties of the form \(\Diamond \Box (m(\mathbf x ) \le \kappa )\) requires additional assumptions on the expression m. One such condition is bounded increase, which was assumed in Theorem 1 for establishing the soundness of proving persistence properties via additive supermartingales.

Definition 9

(Bounded Increase Expression). An expression \(m({\mathbf {x}})\) has bounded increase for a stochastic transition system \(\varPi \) iff there exists \(M>0\) so that for all possible states \({\mathbf {x}} \in X\) and all possible next states \({\mathbf {x}}'\) reachable from \({\mathbf {x}}\), \(|m({\mathbf {x}}') - m({\mathbf {x}})| \le M\).

We give an example of bounded increase expressions, which do not have to be bounded functions: whether a particular expression \(m(\mathbf x )\) has bounded increase on a system depends as much on the system itself as on the growth of m.

Example 7

Consider a stochastic system over \(\mathbb {R}\), in which \(x_{n+1} := x_n - 1 + w_n\), with \(w_n\) a uniform random variable over \([-1,1]\). Then the function m(x) : x has the bounded increase property.

If each \(w_n\) is instead a Gaussian random variable, then m(x) does not satisfy the bounded increase property. Restricting the support of the distribution of \(w_n\) to a compact set (by truncation), however, restores the bounded increase property.

Returning to the MoonWalk system in Example 6, the additive supermartingale m(x) : x, whose sublevel sets do not prove any tail invariance property (due to Lemma 4), does not satisfy the bounded increase property since it is possible to move from \(x = -j\), for any \(j > 0\), to \(x=0\) with nonzero probability.

We close the section by demonstrating incompleteness: bounded increase of an additive supermartingale is not necessary for establishing a tail invariant property.

Example 8

(Incompleteness). Consider the MoonWalk system with modified probability \(p(x) : 1 - \frac{0.5}{(x-1)^2}\), for \(x < 0\), and \(p(0) = 1\). The probabilities of the events \(\{x_n > \kappa \}\) are summable: their sum is bounded by \(\sum _{j=-\kappa }^{\infty }\frac{0.5}{(j+1)^2}\), which converges. By the Borel-Cantelli Lemma [14, 2.3.1], \(\mathrm {Pr}(x > \kappa \ \text {i.o.}) = 0\) holds, i.e., the tail invariant \(\Diamond \Box (x < \kappa )\) holds; however, the system does not have the bounded increase property.
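Both halves of the argument can be checked numerically. The sketch below (ours; the horizon, path count, and thresholds are arbitrary choices) confirms that the jump probabilities are summable and that simulated paths drift off to \(-\infty \), stabilizing below any fixed threshold:

```python
import random, math

random.seed(3)

# Summability of the jump probabilities 0.5/(j+1)^2 (Borel-Cantelli premise).
s = sum(0.5 / (j + 1) ** 2 for j in range(1, 100_000))
assert s < 0.5 * (math.pi ** 2 / 6)   # partial sums stay below 0.5 * zeta(2)

# Modified MoonWalk: from x < 0, jump to 0 with probability 0.5/(x-1)^2,
# otherwise step down to x - 1; from 0, always step down.
def step(x):
    if x == 0:
        return -1
    return 0 if random.random() < 0.5 / (x - 1) ** 2 else x - 1

def final_state(steps=2_000):
    x = 0
    for _ in range(steps):
        x = step(x)
    return x

# Almost every path performs finitely many jumps and then drifts downward.
finals = [final_state() for _ in range(200)]
frac_low = sum(f <= -100 for f in finals) / len(finals)
print(f"fraction of paths with x <= -100 after 2000 steps: {frac_low:.2f}")
assert frac_low > 0.9
```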

5 Implementation and Evaluation

Given a polynomial stochastic system and a semi-algebraic target set, the problem of finding “certificates” V that prove persistence or recurrence properties is in general intractable. In practice, we impose several restrictions on the proof rules so that their solutions are tractable, based on sum-of-squares (SOS) optimization techniques (see e.g. [4, 6] and the references therein). For illustration, we only focus on spersist-geom and persist-geom; the formulations for the other proof rules are similar.

Recall that for proving strong persistence properties via the geometric rule spersist-geom, we need to find a function V such that conditions (p1) and (p6) hold. We impose the following restrictions to make the feasibility problem tractable. First, we require that V is a polynomial of degree at most some integer d. This means that \(\mathcal {D}V\) is also a polynomial, which can be expressed in terms of the coefficients of V and the moments of the random variable \(\mathbf r _c\). Second, we replace the nonnegativity constraints by the more restrictive sum-of-squares (SOS) constraints, i.e., we require that both V and \(-\mathcal {D}V\) be sums of squares of some unknown polynomial functions. We also require that V is positive definite, which is a common regularity condition assumed in semidefinite optimization and allows us to find an \(\alpha \in (0,1)\) such that condition (p6) in spersist-geom holds. Under these restrictions, the generally intractable feasibility problem from spersist-geom is equivalent to a linear semidefinite feasibility problem: a polynomial being a sum of squares (of polynomial functions) is equivalent to its vector of coefficients being the image of an unknown positive semidefinite matrix under a predetermined linear transformation. (For more details on SOS relaxation techniques for solving polynomial feasibility/optimization problems, see e.g. [4, 6].)
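The coefficients-to-Gram-matrix correspondence can be made concrete on a univariate toy case (our own example, unrelated to any specific certificate in the paper): \(p(x) = x^4 - x^2 - 2x + 2 = (x^2-1)^2 + (x-1)^2\), so in the monomial basis \(z = (1, x, x^2)\) the identity \(p(x) = z^\top Q z\) holds for a positive semidefinite matrix Q:

```python
import numpy as np

# p(x) = x^4 - x^2 - 2x + 2 = (x^2 - 1)^2 + (x - 1)^2  (a sum of squares).
# In the monomial basis z = (1, x, x^2), each square v(x)^2 contributes a
# rank-one term a a^T to the Gram matrix Q with p(x) = z^T Q z.
a1 = np.array([-1.0, 0.0, 1.0])   # coefficients of x^2 - 1
a2 = np.array([-1.0, 1.0, 0.0])   # coefficients of x - 1
Q = np.outer(a1, a1) + np.outer(a2, a2)

# Q is PSD by construction: this is the semidefinite feasibility certificate.
assert np.all(np.linalg.eigvalsh(Q) >= -1e-9)

# z^T Q z reproduces p (the linear coefficient constraints of the SDP).
p = lambda x: x**4 - x**2 - 2*x + 2
for x in np.linspace(-3, 3, 13):
    z = np.array([1.0, x, x**2])
    assert abs(z @ Q @ z - p(x)) < 1e-9
print("Gram matrix certificate verified: p is a sum of squares")
```

An SOS solver works in the opposite direction: it searches for a PSD matrix Q subject to the linear constraints that \(z^\top Q z\) matches the target coefficients.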

For proving persistence properties with respect to a nonempty set T via the geometric rule persist-geom, we need to find a function V such that conditions (p1)-(p4) hold. Again, we require that V is a polynomial of degree at most d, and that \(T = \{ \mathbf x \in X \ |\ g_1(\mathbf x ) \ge 0, \ldots , g_{\ell }(\mathbf x ) \ge 0 \}\) for some polynomials \(g_1,\ldots , g_{\ell }\). Then we replace the constraints pertaining to the elements of T by truncated quadratic module membership. For instance, we replace the condition (p3) by the tractable constraint:

$$ \mathcal {D}V = s_0 + s_1 g_1 + \cdots +s_\ell g_\ell , \ \ s_0,s_1,\ldots ,s_\ell \text { SOS of degree at most some integer }\tilde{d}. $$

(The tractability is due to the fact that the polynomials \(s_i\) being SOS can be phrased as semidefinite feasibility constraints.) Similar treatment can be applied on those constraints pertaining to the elements in \(X\setminus T\), which is also a semialgebraic set.

Many standard semidefinite optimization solvers and SOS optimization front-ends (such as SOSTOOLS [25]) are available for solving the SOS optimization problems outlined above. Below we present some simple examples on the use of the spersist-geom rule for proving persistence. In each example, an \(\alpha \)-multiplicative supermartingale expression is obtained using SDPT3-4.0 [31] on MATLAB R2014b, taking less than 10 seconds on a Linux machine with Intel(R) Core(TM) i7-4650U CPU @ 1.70GHz.

Example 9

(Rimless wheel model [7, 22, 27]). A rimless wheel with 8 equally spaced inelastic spokes of length L rolls down a hill with stochastic slope angle \(\gamma \). Let \(\omega _n\) be the angular velocity at the n-th impact (which occurs when the stance leg is vertical). In [7, 27], the dynamics of the rimless wheel is described as:

$$ \textstyle x_{n+1}:=\cos ^2\theta \left( x_n + \frac{2g}{L}\left( 1- \cos \left( \frac{\theta }{2}+\gamma \right) \right) \right) - \frac{2g}{L}\left( 1- \cos \left( \frac{\theta }{2}-\gamma \right) \right) , $$

where \(x_n= \omega _n^2\), g is the gravitational constant, \(\theta =45^\circ \) is the angle between two consecutive spokes and \(\gamma \sim \mathcal {N}(8,1)\) (in degrees). We approximate the functions \(\xi \mapsto \cos (\frac{\theta }{2}\pm \xi )\) over the interval [5, 11] by degree 2 polynomials, and find that the angular velocity in the approximated stochastic system goes to 0 almost surely when \(L=2g\): the function \(V(x): 0.00085 x^3 + x^4\) satisfies the conditions (p1), (p6) with \(X:[0,\infty )\) and \(\alpha =0.95\): V and \(-\mathcal {D}V\) are nonnegative on X and \(\mathcal {D}V(x) \le -0.05 V(x)\) for all \(x\ge 0\). Hence V is a \(\alpha \)-multiplicative supermartingale for this system over X, and \(\Diamond \Box (V(x) \le \varepsilon )\) holds a.s. for any \(\varepsilon >0\). In other words, despite the randomness in the slope of the terrain, the rolling rimless wheel (with very long spokes) would eventually become stationary almost surely.

Example 10

(Room temperature control [3]). In the two-room temperature control example from Sect. 1.1, we are interested in the evolution of the room temperatures within the range \(X= [6, 33]^2\). Consider the nonnegative function \(V(x_1, x_2): (x_1-18.3)^2+(x_2-18.8)^2\). When the noise follows the uniform distribution \(\mathcal {U}(-0.01,0.01)\), \(\mathcal {D}V\) is nonpositive on X, and \(V(x_1,x_2) \ge 0.09\) and \(\mathcal {D}V(x_1,x_2) \le -0.01 V(x_1,x_2)\) for all \(x\in X \setminus [17.8, 18.7]\times [18.4,19.3]\). Hence conditions (p1)-(p4) hold, implying the persistence property \(\Diamond \Box ( 17.8\le x_1 \le 18.7 \wedge 18.4\le x_2 \le 19.3 )\).

In the case of Gaussian noise \(\mathcal {N}(0,0.25)\), \(\mathcal {D}V(x_1,x_2) \le 0.25\) for all \((x_1,x_2)\in [16.9,19.6]\times [17.3,20.2]\), and \(V(x_1,x_2)\ge 0.8\) and \(\mathcal {D}V(x_1,x_2) \le -6\times 10^{-5}\) for all \((x_1,x_2)\in X\setminus [16.9,19.6]\times [17.3,20.2]\). Hence conditions (r1)-(r4) hold, implying the recurrence property \(\Box \Diamond (16.9\le x_1\le 19.6 \wedge 17.3\le x_2 \le 20.2)\).

We list some additional examples in which a system is proved to satisfy a persistence or recurrence property via some of the proof rules from Sect. 3. The details of these examples can be found in the extended version of the paper.

Additional Stochastic Systems | Noise \(u_j\) (i.i.d.) | Supermartingale \(V(x,y)\)

\(x' := x + \tfrac{1}{2} y + u_1\), \(y' := \tfrac{1}{2} x + y - u_2\) | \(\mathcal {N}(-1,1)\) | \(\max (x-y,0)\) over \(X=\mathbb {R}^2\) (for proving recurrence)

\(x' := 0.5 (x + y) + 0.4 u_{1}\sqrt{x^2+y^2}\), \(y' := 0.5 (x - y) + 0.4 u_{2}\sqrt{x^2+y^2}\) | \(\mathcal {N}(0,1)\) | \(x^2+y^2\) over \(X=\mathbb {R}^2\) (0.82-multi.)

\(x' := 0.75y^4+0.1u_{1}\), \(y' := 0.75x^4+0.1u_{2}\) | \(\mathcal {U}(-1,1)\) | \(0.78x^2+1.23xy+0.78y^2\) over \(X=\{(x,y)\,|\,x^2+y^2\le 1\}\) (0.75-multi.)

\(x' := 0.1(y(3x^2+2y^2-0.5)+u_{1}\sqrt{x^2+y^2})\), \(y' := 0.1(y(2x^2+4xy+3y^2-0.5)+u_{2}\sqrt{x^2+y^2})\) | \(\mathcal {U}(-\sqrt{3},\sqrt{3})\) | \(1.55x^2+2.36xy+1.34y^2\) over \(X=\{(x,y)\,|\,x^2+y^2\le 1\}\) (0.5-multi.)
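The "(0.82-multi.)" entry for the second system can be verified by hand: writing \(r^2 = x^2+y^2\), we get \(\mathbb {E}[x'^2 + y'^2] = 0.25\,((x+y)^2 + (x-y)^2) + 2 \cdot 0.16\, r^2 = 0.82\, r^2\). A Monte Carlo cross-check (our own; the sample sizes and test points are arbitrary):

```python
import random

random.seed(4)

# Second system from the table: x' = 0.5(x+y) + 0.4*u1*r, y' = 0.5(x-y) + 0.4*u2*r,
# with r = sqrt(x^2 + y^2) and u1, u2 ~ N(0, 1) i.i.d.
def next_V(x, y):
    r = (x * x + y * y) ** 0.5
    x1 = 0.5 * (x + y) + 0.4 * random.gauss(0.0, 1.0) * r
    y1 = 0.5 * (x - y) + 0.4 * random.gauss(0.0, 1.0) * r
    return x1 * x1 + y1 * y1

for (x, y) in [(1.0, 0.0), (2.0, -3.0), (-0.5, 0.7)]:
    V = x * x + y * y
    est = sum(next_V(x, y) for _ in range(200_000)) / 200_000
    print(f"E[V'] / V at ({x}, {y}) ~= {est / V:.3f}")   # should be close to 0.82
    assert abs(est / V - 0.82) < 0.03
```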

6 Related Work

Martingale analysis has been used to prove almost sure termination [8, 10, 15], derive inductive invariant expressions [9] in probabilistic programs, and prove stochastic reachability and safety [26, 27] in the context of stochastic hybrid systems. Our paper extends this set of properties to include tail invariant, or qualitative persistence, properties.

Qualitative persistence properties are expressible in PCTL [18] and have been studied by the model-checking community [5, 21] in the context of finite-state Markov processes. The approach leverages the fact that the system ends up a.s. in a bottom strongly connected component (BSCC [11, 13], a strongly connected component with no outgoing edges), then uses a graph algorithm to efficiently check that all states in the BSCC satisfy \(\varphi _\textsc {inv}\). Baier et al. [5] have shown that while such results suffice for the analysis of finite-state Markov chains, they do not extend to infinite-state models.

Abate [1, 2] and Tkachev et al. [29] present approaches that reduce the verification problem for infinite-state Markov processes to one over finite-state Markov chains. Specifically, [1, 2] presents a framework for proving probabilistic bisimulation between the original infinite-state system and its discretized (approximate) finite-state version. Guarantees on the quality of the results are proved using supermartingale bisimulation functions. Tkachev et al. [29] present a framework for analyzing infinite-horizon reach-avoid properties (\(\Diamond \varphi \) and \(\Box \varphi \)) for Markov processes. They use locally-excessive (i.e., supermartingale) value functions to identify a subset of the state-space where discretization guarantees a precise approximate solution. In [28, 30], Tkachev et al. tackle quantitative reachability, invariance, and reach-while-avoid properties operating directly over the infinite-state model. [28] provides a characterization of the state-space based on harmonic functions defining absorbing or stochastically attractive sets. Unfortunately, the sufficient conditions for certificates in [28] define problems that in general have no analytical or computational solution. Our paper can be seen as a set of practical sufficient conditions that yield efficiently computable problems (SOS, SDP) for qualitative repeated reachability.

The problem of stability and identifying the limiting behavior of a stochastic system has been well-studied in the theory of Markov chains [24]. Similar “Foster-Lyapunov” [16] drift criteria have been derived to argue recurrence and transience for sets of states. Unfortunately, most results rely on the topological properties of the infinite-state Markov chains that may be difficult to check automatically.

7 Conclusion

We presented an analysis framework capable of proving that the limiting behavior of an infinite-state discrete-time polynomial stochastic system eventually settles almost surely within a region S of the state-space (i.e., \(\Diamond \Box (S)\)). Our analysis employs constraint-based invariant generation techniques to efficiently infer polynomial functions over the states of the system: nonnegative \(\alpha \)-multiplicative supermartingale expressions and additive supermartingale expressions of bounded increase. We established that both types of functions constitute certificates of tail invariant properties, and we demonstrated their equivalence. Finally, we highlighted the individual strengths of each of the two types, as well as the incompleteness of the general approach, by means of several simple yet intricate examples.