# Safety Verification of Continuous-Space Pure Jump Markov Processes


## Abstract

We study the probabilistic safety verification problem for pure jump Markov processes, a class of models that generalizes continuous-time Markov chains over continuous (uncountable) state spaces. Solutions of these processes are piecewise constant, right-continuous functions from time to states. Their jump (or reset) times are realizations of a Poisson process, characterized by a jump rate function that can be both time- and state-dependent. Upon jumping in time, the new state of the solution process is specified according to a (continuous) stochastic conditional kernel. After providing a full characterization of safety properties of these processes, we describe a formal method to abstract the process as a finite-state discrete-time Markov chain; this approach is formal in that it provides a-priori error bounds on the precision of the abstraction, based on the continuity properties of the stochastic kernel of the process and of its jump rate function. We illustrate the approach on a case study of thermostatically controlled loads.

## Keywords

Transition Kernel, Jump Time, Continuous State Space, Gaussian Density Function, Demand Response Program

## 1 Introduction

Stochastic processes evolving in continuous time are used to model many phenomena in science and engineering. In recent years, there has been a lot of work in the algorithmic analysis and formal verification of such models with respect to quantitative temporal specifications. For example, the verification of continuous-time Markov chains over finite state spaces has been widely addressed in the literature against properties expressed in temporal logics such as CSL [5, 6, 7], MTL [13], and timed-automata specifications [14], and there exist efficient software tools [24, 27].

In this paper, we extend this line of work and study the class of continuous-space, pure jump Markov processes (cPJMP, for short). A cPJMP evolves in continuous time. The process starts at state \(X_{t_0} = x_0\) at time \(t = t_0\) and waits until a random time \(t = T_1\), governed by a Poisson process depending on \(x_0\) and possibly time-inhomogeneous, when it makes a jump to a new state \(X_{T_1} = x_1\) based on a transition kernel that is conditional on the jumping time and on \(x_0\). Then it waits until time \(t= T_2\), when it makes another jump to state \(X_{T_2} = x_2\) with probability that depends on the current time and on \(x_1\), and so on. The states take values over a continuous domain, hence the transition kernel induces continuous measures.
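The jump mechanism just described can be sampled directly. The following sketch (our illustration, not from the paper) simulates one cPJMP trajectory by thinning, assuming a uniform bound \(\varLambda\) on the jump rate and a user-supplied sampler for the kernel \(\pi\); the rate function and Gaussian reset kernel of the toy instance are invented for illustration.

```python
import math
import random

def simulate_cpjmp(t0, x0, T, lam, Lam, sample_pi, rng=random.Random(0)):
    """Sample one cPJMP path on [t0, T] by thinning (rejection of candidate jumps).

    lam(t, x)       -- jump rate function, assumed bounded by Lam
    sample_pi(t, x) -- draws the post-jump state from the kernel pi(t, x, .)
    Returns the list of (jump time, state) pairs, starting with (t0, x0).
    """
    t, x = t0, x0
    path = [(t, x)]
    while True:
        # Candidate jump times form a Poisson process of rate Lam >= lam(t, x);
        # each candidate is accepted with probability lam(t, x) / Lam.
        t += rng.expovariate(Lam)
        if t > T:
            return path
        if rng.random() < lam(t, x) / Lam:
            x = sample_pi(t, x)
            path.append((t, x))

# Toy instance: time-inhomogeneous rate, Gaussian resets around the current state.
lam = lambda t, x: 1.0 + 0.5 * math.sin(t)
path = simulate_cpjmp(0.0, 0.0, 10.0, lam, 1.5,
                      lambda t, x: x + random.gauss(0.0, 1.0))
```

Between jumps the state is frozen, so thinning against the constant bound \(\varLambda\) is exact for this class of processes.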

cPJMPs generalize continuous-time, finite-state Markov chains (CTMCs) by allowing time-inhomogeneous behaviors (the waiting times and transition probabilities can depend on time) and allowing for general, continuous state spaces. Correspondingly, non-deterministic extensions of cPJMPs (not explicitly discussed in this work, but directly obtainable from cPJMPs) extend general-space MDPs [10] and LMPs [30] by allowing a random inter-arrival time in between stochastic resets over their continuous state space. cPJMPs can be employed in the definition and analysis of jump-diffusion processes [25]: of interest to this work, the jump component can capture event-driven uncertainties, such as corporate defaults, operational failures, or insured events [31]. It is likewise possible to obtain a cPJMP by random time sampling of a general stochastic differential equation (SDE) – indeed cPJMPs can be as well thought of as SDEs with jumps, with drift and diffusion terms that are equal to zero. This connection with diffusions driven by Wiener processes renders cPJMP relevant to areas including financial and economic modeling [31], systems biology [4], physics and chemistry [34].

We study the problem of approximately computing the bounded-time safety probability of a cPJMP by generalizing the corresponding algorithms for CTMCs. First, we show that a cPJMP can be embedded into a discrete-time, continuous-space Markov process (DTMP). In this process, we “compile away” the time inhomogeneity of the process by explicitly modeling the time as an additional state variable. Second, we characterize the bounded-time safety probability of the discrete-time Markov process as the least fixed point solution of a system of integral equations that generalize the Bellman equations for CTMCs. Finally, under Lipschitz continuity assumptions on the jump rate function and on the jump measure of the cPJMP, we show how the continuous-space discrete-time Markov process can be approximated by a finite-state discrete-time Markov chain (DTMC), up to any desired degree of precision. Our technical result shows a guaranteed upper bound on the error incurred in computing the bounded-time safety probability introduced by the finite-state approximation.

While we focus on bounded-time safety probability computation, our algorithms can be generalized to provide approximate model checking algorithms for more expressive temporal logics such as continuous-time stochastic logic (CSL) [5, 9]. We demonstrate our results on a case study from energy systems, modeling thermostatically-controlled loads as a cPJMP.

## 2 Pure Jump Markov Processes in Continuous Time

### 2.1 Model Definition - Syntax and Semantics

Let *K* be the (not necessarily finite) state space and \(\mathcal K\) a sigma-algebra on *K*. Let \(\varOmega \) be a sample space and \({\mathbb R}_{\ge 0}\) the set of non-negative reals. We consider stochastic processes \(X:\varOmega \times {\mathbb R}_{\ge 0}\rightarrow K\) in continuous time. For any \(t\in {\mathbb R}_{\ge 0}\), the function \(X(\cdot ,t):\varOmega \rightarrow K\) is a random variable, which we denote by \(X_t\). For every \(I\subseteq \mathbb R_{\ge 0}\) we write \(\mathcal F_I = \sigma (X_t,t\in I)\) for the sigma-algebra on \(\varOmega \) generated by the stochastic process *X* restricted to the index set *I*. We suppose that for every \(t\in \mathbb R_{\ge 0}\) and \(x\in K\), a probability \(\mathbb P^{t,x}\) is given on \((\varOmega ,\mathcal F_{[t,\infty )})\). The stochastic process \(X: \varOmega \times {\mathbb R}_{\ge 0} \rightarrow K\) is a (pure) jump Markov process if the following conditions hold:

- (a)
\(\mathcal K\) contains all one-point sets and \(\mathbb P^{t,x}(X_t = x) = 1\) for every \(t\in \mathbb R_{\ge 0}\), \(x\in K\).

- (b)
For every \(0\le t\le s\) and \(A\in \mathcal K\) the function \(x\mapsto \mathbb P^{t,x}(X_s\in A)\) is \(\mathcal K\)-measurable.

- (c)
**[Markov property]** For every \(0\le u \le t\le s\), \(A\in \mathcal K\) we have \(\mathbb P^{u,x}(X_s\in A|\mathcal F_{[u,t]}) = \mathbb P^{t,X_t}(X_s\in A)\), \(\mathbb P^{u,x}\)-a.s.

- (d)
**[Pure jump property]** For every \(\omega \in \varOmega \) and \(t\ge 0\) there exists \(\delta >0\) such that \(X_s(\omega ) = X_t(\omega )\) for \(s\in [t,t+\delta ]\); this is equivalent to requiring that all the trajectories of *X* are càdlàg [11] when *K* is given the discrete topology (where all subsets are open).

- (e)
**[Non-explosive property]** For every \(\omega \in \varOmega \) the number of jumps of the trajectory \(t\mapsto X_t(\omega )\) is finite on every bounded interval.

Condition (a) enables us to assign probabilities to any points \(x\in K\). In particular, the probability measure \(\mathbb P^{t,x}\) assigns probability 1 to *x*, so that the process is initialized deterministically at *x* at time *t*. Condition (b) is essential for transporting any probability measure on \(X_t\) to the events \(X_s\in A,\,\,A\in \mathcal K\), for any \(t\le s\).

We next introduce the law *Q* of the first jump time \(T_1\) and of the corresponding position \(X_{T_1}\). To proceed formally, we first fix \(t\ge 0\) and \(x\in K\) and define the first jump time \(T_1 = \inf \{s>t\,:\, X_s\ne X_t\}\), with the convention that \(T_1 = \infty \) if the indicated set is empty; \(T_1\) is the first jump time after *t*. Its associated probability measure also depends on *x* through \(\mathbb P^{t,x}\). Allowing this jump time to be equal to infinity requires extending the definition of the process *X* as follows. Take an extra dummy point \(\varDelta \notin K\) and redefine \(X:\varOmega \times {\mathbb R}_{\ge 0}\cup \{\infty \}\rightarrow K\cup \{\varDelta \}\) such that \(X(\omega ,\infty ) = X_\infty (\omega ) = \varDelta \) for all \(\omega \in \varOmega \). Then \(X_{T_1}:\varOmega \rightarrow K\cup \{\varDelta \}\) is well defined. Note that \(X_{T_1}\) is associated with a probability measure first through the random variable \(T_1\) (the first jump time) and then through the process *X* conditioned on knowing this jump time.

On the extended space \(S:=\left( \mathbb R_{\ge 0}\times K\right) \cup \{(\infty ,\varDelta )\}\) we consider the smallest sigma-algebra, denoted by \(\mathcal S\), containing \(\{(\infty ,\varDelta )\}\) and all sets of \(\mathcal B(\mathbb R_{\ge 0})\otimes \mathcal K\) (here and in the following \(\mathcal B(\varLambda )\) denotes the Borel sigma-algebra of a topological space \(\varLambda \), and \(Y\otimes Z\) is the product sigma-algebra of two sigma-algebras *Y*, *Z*, that is the smallest sigma-algebra generated by subsets of the form \(A_1\times A_2\), \(A_1\in Y,A_2\in Z\)). Note that this sigma-algebra \(\mathcal S\) is smaller than the product of two sigma-algebras defined on \(\mathbb R_{\ge 0}\cup \{\infty \}\) and \(K\cup \{\varDelta \}\). The extended process *X* ensures that \(\mathcal S\) is sufficient to contain the associated probability measure of \((T_1,X_{T_1})\). With these definitions, \((T_1,X_{T_1})\) is a random variable with values in \((S,\mathcal S)\), and its law under \(\mathbb P^{t,x}\) is denoted by \(Q(t,x,\cdot )\).

*Q* is constructed starting from a given transition measure from \(\mathbb R_{\ge 0}\times K\) to \(\mathcal K\), called *rate measure* and denoted by \(\nu (t,x,A)\), \(t\in \mathbb R_{\ge 0},x\in K,A\in \mathcal K\). We require that \(A\mapsto \nu (t,x,A)\) is a positive measure on \(\mathcal K\) for all \(t\in \mathbb R_{\ge 0}\) and \(x\in K\), and that \((t,x)\mapsto \nu (t,x,A)\) is \(\mathcal B(\mathbb R_{\ge 0})\otimes \mathcal K-\)measurable for all \(A\in \mathcal K\). We also assume that the rate measure \(\nu \) satisfies the two conditions

- (f)
\(\sup \{\nu (t,x,K)|t\in \mathbb R_{\ge 0},x\in K\}<\infty \) and

- (g)
\( \nu (t,x,\{x\}) = 0\) for all \(t\in \mathbb R_{\ge 0},x\in K.\)

Define \(\lambda (t,x) := \nu (t,x,K)\) and, whenever \(\lambda (t,x)>0\), \(\pi (t,x,A) := \nu (t,x,A)/\lambda (t,x)\) for each \(A\in \mathcal K\). Therefore \(\lambda \) is a nonnegative bounded measurable function and \(\pi \) is a transition probability on *K* satisfying \(\pi (t,x,\{x\}) = 0\) for all *t* and *x*. Function \(\lambda \) is called the *jump rate function*, and \(\pi \) the *jump measure*. Note that we have \(\nu (t,x,A) = \lambda (t,x)\pi (t,x,A), \forall t\in \mathbb R_{\ge 0},x\in K,A\in \mathcal K\). Given the rate measure \(\nu \), we require that for the Markov process *X* we have, for \(0\le t\le a<b\le \infty , x\in K, A\in \mathcal K\),

\(\mathbb P^{t,x}\left( T_1\in [a,b],\, X_{T_1}\in A\right) = \int _a^b \exp \left( -\int _t^s\lambda (r,x)dr\right) \nu (s,x,A)\, ds. \qquad (2)\)

*Q* was described above as the law of \((T_1,X_{T_1})\) under \(\mathbb P^{t,x}\). Note that (2) completely specifies the probability measure \(Q(t,x,\cdot )\) on \((S,\mathcal S)\): indeed simple computations show that \(\mathbb P^{t,x}(T_1 > a) = \exp \left( -\int _t^a\lambda (r,x)dr\right) \) and, consequently, \(Q(t,x,\{(\infty ,\varDelta )\}) = \exp \left( -\int _t^\infty \lambda (r,x)dr\right) \).

### 2.2 Examples and Related Models

### *Example 1*

**Poisson-driven differential equation** [31, Section 1.7]. Let the process \({\{ \mathcal N_t\mid t\ge 0 \}}\) represent a standard Poisson process with homogeneous rate \(\lambda \).^{1} Consider a pure jump process \(X = \{X_t,\, t\in \mathbb R_{\ge 0},\, X_t \in \mathbb R \}\), driven by the Poisson process \(\mathcal N_t\), whose value \(X_t\) at time *t* satisfies the SDE \(dX_t = c(t,X_{t^-})\,d\mathcal N_t\), where *c* is called the *jump coefficient*. For the case of \(c(t,x) = c_0 x\) with the constant \(c_0\in \mathbb R_{\ge 0}\) and initial value \(X_0>0\), the process has the explicit representation \(X_t = X_0(c_0+1)^{\mathcal N_t}\).^{2} We are interested in the probability that the process remains below a given level \(\alpha _{\mathfrak h}>X_0\) over the time interval [0, *T*]. This probability is analytically computable for the above simple process: defining \(\beta _{\mathfrak h} = \left( \ln \alpha _{\mathfrak h}-\ln X_0\right) /\ln (c_0+1)\), this probability is \(\sum _{n=0}^{\lfloor \beta _{\mathfrak h}\rfloor }e^{-\lambda T}(\lambda T)^n/n!\). \(\square \)
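Since the trajectories of this example are nondecreasing, the safety event depends only on \(\mathcal N_T\). The following sketch (our addition) evaluates the closed-form expression and cross-checks it by Monte Carlo simulation of the Poisson process.

```python
import math
import random

def safety_prob_analytic(x0, c0, lam, T, alpha_h):
    """P(X_t < alpha_h for all t in [0, T]) for X_t = x0 * (c0 + 1)**N_t."""
    beta = (math.log(alpha_h) - math.log(x0)) / math.log(c0 + 1.0)
    n_max = math.floor(beta)
    return sum(math.exp(-lam * T) * (lam * T) ** n / math.factorial(n)
               for n in range(n_max + 1))

def safety_prob_mc(x0, c0, lam, T, alpha_h, runs=20000, rng=random.Random(1)):
    """Monte Carlo check: the path is nondecreasing, so only N_T matters."""
    safe = 0
    for _ in range(runs):
        n_T, t = 0, rng.expovariate(lam)
        while t <= T:           # count Poisson arrivals in [0, T]
            n_T += 1
            t += rng.expovariate(lam)
        safe += x0 * (c0 + 1.0) ** n_T < alpha_h
    return safe / runs

p = safety_prob_analytic(1.0, 0.5, 2.0, 1.0, 3.0)
```

With these illustrative parameters, \(\beta _{\mathfrak h}\approx 2.7\), so the sum runs over \(n\le 2\).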

### *Example 2*

**Compound Poisson processes** [31, Section 1.1] represent a generalization of Poisson processes, with exponential waiting times between jumps but where jump sizes, rather than being deterministic, follow an arbitrary distribution. Let \(\{y_n\}_{n\ge 1}\) be a sequence of independent random variables with distribution \(\mu \) for all \(n\ge 1\) and assume that the standard Poisson process \({\{ \mathcal N_t\mid t\ge 0 \}}\) with parameter \(\lambda >0\) is independent of \(\{y_n\}_{n\ge 1}\). The compound Poisson process \(X_t\) is represented in the form \(X_t = \sum _{n=1}^{\mathcal N_t} y_n\). A typical application of compound Poisson processes is to model the aggregate claim up to time *t* generated by a portfolio of insurance policies, where the individual claims are distributed according to \(\mu \). Let us assume the gamma distribution \(y_n\sim \varGamma (a,b)\) for the individual claims [35] and answer the same safety question as in the previous example: what is the probability that the aggregate claim does not exceed \(\alpha _{\mathfrak h}>0\) in the time interval [0, *T*]? This probability is also analytically computable: since the sum of *n* independent \(\varGamma (a,b)\) random variables is distributed as \(\varGamma (na,b)\), conditioning on the number of jumps results in \(\sum _{n=0}^{\infty }e^{-\lambda T}(\lambda T)^n/n!\; F_{\varGamma (na,b)}(\alpha _{\mathfrak h})\), where \(F_{\varGamma (na,b)}\) denotes the corresponding cumulative distribution function.
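The conditioning argument can be checked numerically. The sketch below (our addition) restricts to integer shape *a*, so that the aggregate claim given *n* jumps is Erlang-distributed and its CDF has an elementary form; a Monte Carlo estimate serves as a cross-check.

```python
import math
import random

def erlang_cdf(k, scale, x):
    """CDF at x of a Gamma(k, scale) variable with integer shape k (Erlang)."""
    if k == 0:
        return 1.0  # the empty sum of claims is 0 <= x
    z = x / scale
    return 1.0 - math.exp(-z) * sum(z ** j / math.factorial(j) for j in range(k))

def safety_prob(lam, T, a, b, alpha_h, n_terms=100):
    """P(aggregate claim <= alpha_h on [0, T]) for Gamma(a, b) claims, a integer.

    Conditioned on N_T = n jumps, the aggregate claim is Gamma(n*a, b)."""
    return sum(math.exp(-lam * T) * (lam * T) ** n / math.factorial(n)
               * erlang_cdf(n * a, b, alpha_h) for n in range(n_terms))

def safety_prob_mc(lam, T, a, b, alpha_h, runs=20000, rng=random.Random(3)):
    """Monte Carlo cross-check: the aggregate claim is nondecreasing in time."""
    safe = 0
    for _ in range(runs):
        n, t = 0, rng.expovariate(lam)
        while t <= T:
            n, t = n + 1, t + rng.expovariate(lam)
        safe += sum(rng.gammavariate(a, b) for _ in range(n)) <= alpha_h
    return safe / runs
```

The Poisson tail makes the truncation at `n_terms` terms negligible for moderate \(\lambda T\).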

Notice that the safety probability is expressible analytically in the above two examples. This is first because the trajectories of the solution are always non-decreasing, and secondly because the distribution of the solution process conditioned on the value of the underlying Poisson process can be computed analytically. In general, however, trajectories of cPJMPs cannot be derived explicitly, and as such the safety probability is not analytically expressible. In Sect. 3 we provide a general characterization of the solution of the probabilistic safety problem. In Sect. 4 we also work out a formal approximation method to numerically compute the solution.

### *Example 3*

**Continuous-time Markov chains** [8]. The class of cPJMPs we consider includes, as special cases, all time-homogeneous, non-explosive jump Markov processes: these correspond to a function \(\nu \) not depending on the time variable *t*. Within this time-homogeneous case we need to retain the boundedness assumption in (f) for the rate function. Assuming further that *K* is a finite or countably infinite set, we obtain the class of continuous-time Markov chains characterized by the transition rate matrix \(\nu (x,\{y\})_{x,y\in K}\). The probability of remaining within a safe subset of *K* over the time interval [0, *T*] can be expressed as the solution of a system of integral equations [7], which is a special case of the Bellman fixed-point equation developed in Sect. 3 for cPJMPs, but is not expressible in closed form. \(\square \)

### *Example 4*

**cPJMP defined by dynamical systems.** Consider a process *X* with piecewise-constant trajectories, which resets (or jumps) at time *t* over a space *K* according to a vector field \(f:K\times \mathbb R^n\times \mathbb R_{\ge 0}\rightarrow K\), so that at each jump time *t*

\(x(t) = f(x(t^-),\zeta (t),t), \qquad (5)\)

where \(\zeta (\cdot )\) is a process of independent random variables taking values in \(\mathbb R^n\). The jump destination thus depends on the time *t* and on the continuous state of the process *x*(*t*). Notice that the dependence of the vector field *f* on time is in accordance with [25]. The map *f*, together with the distribution of the process \(\zeta (\cdot )\), uniquely defines a jump measure \(\pi (t,x,A)\), which gives the probability of jumping from any state *x* at time *t* to any (measurable) subset of the state space \(A\subseteq K\) [26, Proposition 7.6]: \(\pi (t,x,A) = \mathbb P\left( \zeta \,:\, f(x,\zeta ,t)\in A\right) \). \(\square \)

### 2.3 Embedded Discrete-Time Markov Process of a cPJMP

We have defined a cPJMP on a measurable space \((K,\mathcal K)\) through the transition measure \(\nu \). The trajectories of a cPJMP are piecewise constant, which makes them fully representable by their jump times and corresponding values. It is worth studying the properties of the random variables \((T_n(w),X_{T_n}(w))\), \(n\in \mathbb N:= \{0,1,2,\ldots \}\), where \(T_n\) is the \(n^{\text {th}}\) jump time and \(X_{T_n}\) is the corresponding value of the process. The ensuing Theorem 1 states that \((T_n,X_{T_n})_{n\in \mathbb N}\) can be considered as a discrete-time Markov process (DTMP) by slight extension of the definition of *Q*. The discrete time is indexed by nonnegative natural numbers \(n\in \mathbb N\), as opposed to continuous time indexed by \(t\in \mathbb R_{\ge 0}\).

### **Definition 1**

A discrete-time Markov process \((Y_n)_{n\in \mathbb N}\) is uniquely defined by a triple \(\mathfrak D = (E_{\mathfrak y},\mathcal E_{\mathfrak y}, P_{\mathfrak y})\), where \((E_{\mathfrak y},\mathcal E_{\mathfrak y})\) is a measurable space and \(P_{\mathfrak y}:E_{\mathfrak y}\times \mathcal E_{\mathfrak y}\rightarrow [0,1]\) is a transition kernel such that for any \(y\in E_{\mathfrak y}\) and \(A\in \mathcal E_{\mathfrak y}\), \(P_{\mathfrak y}(y,A)\) gives the probability that \(Y_{n+1}\in A\), conditioned on \(Y_n = y\). \(E_{\mathfrak y}\) is called the state space of the DTMP \(\mathfrak D\) and the elements of \(E_{\mathfrak y}\) are the states of \(\mathfrak D\). The process is time-inhomogeneous if \(P_{\mathfrak y}\) depends also on the time index *n*.

We adapt the following result from [23, Chapter III, Section 1, Theorem 2].

### **Theorem 1**

Starting from \(T_0 = t\) define inductively \(T_{n+1} = \inf \{s>T_n\,:\, X_s\ne X_{T_n}\}\), with the convention that \(T_{n+1} = \infty \) if the indicated set is empty. Under the probability \(\mathbb P^{t,x}\), the sequence \((T_n,X_{T_n})_{n\in \mathbb N}\) is a DTMP in \((S,\mathcal S)\) with transition kernel *Q*, provided we extend the definition of *Q* making the state \((\infty ,\varDelta )\) absorbing, by defining \(Q(\infty ,\varDelta ,\mathbb R_{\ge 0}\times K) = 0, Q(\infty ,\varDelta ,\{(\infty ,\varDelta )\}) = 1\).

Note that \((T_n,X_{T_n})_{n\in \mathbb N}\) is time-homogeneous, although in general *X* is not.

Theorem 1 states that given the stochastic process \(X:\varOmega \times \mathbb R_{\ge 0}\rightarrow K\) with probability measure \(\mathbb P^{t,x}\) as defined in Sect. 2, we can construct a DTMP on \((S,\mathcal S)\) with the extended transition kernel *Q*, whose state includes jump times and jump values of *X*. The inverse is also true, as described next, which allows for a simple description of the process *X*. Suppose one starts with a DTMP \((\tau _n,\xi _n)_{n\in \mathbb N}\) in *S* with transition probability kernel *Q* and a given starting point \((t,x)\in \mathbb R_{\ge 0}\times K\). One can then define a process *Z* in *K* setting \(Z_t = \sum _{n=0}^{N_{\mathfrak y}}\xi _n \mathbf 1_{[\tau _n,\tau _{n+1})}(t)\), where \(N_{\mathfrak y} := \sup \{n\in \mathbb N\, :\, \tau _n<\infty \}\). Then *Z* has the same law as the process *X* under \(\mathbb P^{t,x}\).
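The reconstruction of *Z* from the samples \((\tau _n,\xi _n)\) amounts to a piecewise-constant, right-continuous lookup, which can be sketched as follows (our illustration):

```python
import bisect

def trajectory_value(jumps, t):
    """Evaluate Z_t = sum_n xi_n * 1_[tau_n, tau_{n+1})(t) for the
    piecewise-constant path jumps = [(tau_0, xi_0), (tau_1, xi_1), ...],
    with tau_0 <= t and strictly increasing jump times."""
    times = [tau for tau, _ in jumps]
    i = bisect.bisect_right(times, t) - 1  # index of the last jump at or before t
    return jumps[i][1]

jumps = [(0.0, 'a'), (1.3, 'b'), (2.7, 'c')]
assert trajectory_value(jumps, 0.5) == 'a'
assert trajectory_value(jumps, 1.3) == 'b'   # right-continuous at jump times
assert trajectory_value(jumps, 10.0) == 'c'
```

The `bisect_right` choice encodes the càdlàg convention: at a jump time the new value applies.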

### *Example 5*

For a CTMC defined by its transition rate matrix \(\nu (x,{\{ x' \}})\), we get that \(\pi (x,\{y\})_{x,y\in K}\) is the stochastic transition matrix of the corresponding embedded discrete-time Markov chain (DTMC). \(\square \)

## 3 Bounded-Time Safety Probability for cPJMPs

We are interested in the probability that the process, initialized at \(X_{t_0}=x_0\), remains within a given safe set \(\mathcal A\in \mathcal K\) during the bounded time interval \([t_0,T]\); this is the bounded-time safety probability of the process *X*, that is the quantity \(p_{\mathcal A}(t_0,x_0,T) := \mathbb P^{t_0,x_0}\left( X_t\in \mathcal A,\ \forall t\in [t_0,T]\right) \).^{3} Note that in this setup we must account for the initial time \(t_0\) alongside the initial state \(x_0\) because the process is time-inhomogeneous.

This safety requirement can be encoded by a finite transition system \(\mathcal Q\) with three states \(\mathbb S\), \(\mathbb U\), and \(\mathbb W\), labeled *Safe*, *Unsafe*, and *Wait*. The transition system is initialized at \(\mathbb {W}\) or \(\mathbb {U}\) depending on whether the initial state of the process is in the safe set or not. A transition from \(\mathbb {W}\) to \(\mathbb {S}\) is activated if the next jump time \(\tau ^+\) is outside the interval \(\mathcal T := [0,T]\) and the next state \(x^+\in K\). A transition \(\mathbb {W}\rightarrow \mathbb {U}\) is activated if \(\tau ^+\in \mathcal T\) and the next state is outside the safe set. Finally, the self loops at all the states of \(\mathcal Q\) characterize all other dynamics of the transition system.

The reformulation of \(p_{\mathcal A}(t_0,x_0,T)\) as (7) indicates its close relationship with the infinite-horizon probabilistic reach-avoid specification over DTMPs. This problem is studied in [32, 33], which formulate the solution as a Bellman equation and describe convergence properties of the series based on contractivity of the stochastic operator associated to the DTMP. The next theorem can be seen as an extension of [33, Section 3.1] and presents a Bellman equation for the characterization of the safety probability \(p_{\mathcal A}(t_0,x_0,T)\), which is an equation for the infinite-horizon reach-avoid problem over the DTMP \(\mathfrak M\) with the safe set \((\mathcal G\cup \mathcal H)\in \mathcal S\) and target set \(\mathcal H\in \mathcal S\).

### **Theorem 2**

The safety probability satisfies \(p_{\mathcal A}(t_0,x_0,T) = V(t_0,x_0)\), where the value function *V* is the least fixed point of the Bellman equation associated with the reach-avoid problem over \(\mathfrak M\), and is obtained as the limit of the value iteration \(V_n\), which converges monotonically in *n*.

### **Lemma 1**

For a given set \(\mathcal G = \mathcal T\times \mathcal A\), with \(\mathcal T = [0,T]\) and bounded safe set \(\mathcal A\), suppose there exists a finite constant \(\kappa \ge \sup \left\{ \int _0^T\lambda (r,x)dr,\,x\in \mathcal A\right\} \). Then the invariance operator \(\mathcal I_{\mathcal G}\) is contractive with the norm \(\Vert \mathcal I_{\mathcal G}\Vert \le 1-e^{-\kappa }\).

### **Theorem 3**

\(|V(s)-V_n(s)|\le \left( 1-e^{-\kappa }\right) ^n\) for all \(s\in S\) and \(n\in \mathbb N\).

The previous result allows us to select a sufficiently large *n* in order to make the difference between *V* and \(V_n\) smaller than a predefined threshold. For a given threshold, say \(\epsilon _1\), one can select \(N\ge \ln \epsilon _1/\ln \left( 1-e^{-\kappa }\right) \) and compute \(V_N\). Theorem 3 then guarantees that \(|V(s)-V_N(s)|\le \epsilon _1\) for all \(s\in S\). The next section is devoted to the precise computation of \(V_N\) over the uncountable state space *S*, for a preselected *N*.
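The selection of the step-horizon mirrors the bound \(N\ge \ln \epsilon _1/\ln (1-e^{-\kappa })\) directly; a minimal sketch (our illustration):

```python
import math

def steps_for_accuracy(eps1, kappa):
    """Smallest N with (1 - e**(-kappa))**N <= eps1, i.e. the step-horizon
    guaranteeing |V - V_N| <= eps1 by the contraction factor of Lemma 1."""
    contraction = 1.0 - math.exp(-kappa)
    return math.ceil(math.log(eps1) / math.log(contraction))

N = steps_for_accuracy(1e-3, 1.0)   # kappa = 1 gives contraction 1 - 1/e
```

Both logarithms are negative, so the ratio is positive and the ceiling gives the smallest admissible integer.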

## 4 Finite DTMCs as Formal Approximations of cPJMPs

In the previous sections we have shown that the bounded-time safety verification of the given cPJMP can be approximated by a step-bounded reach-avoid verification of a DTMP, with guaranteed error bounds. Due to lack of analytical solutions, the verification of DTMPs against PCTL specifications (amongst which reach-avoid) is studied in the literature via finite abstractions [1, 16], which result in the PCTL verification of discrete time, finite space Markov chains (DTMCs) [17, 18]. In other words, the goal of the DTMC abstraction is to provide a discrete and automated computation of the reach-avoid probability. The approach is formal in that it allows for the computation of explicit bounds on the error associated with the abstraction.

We now introduce some regularity assumptions on the jump rate function \(\lambda (\cdot )\) (Assumption 1) and on the jump measure \(\pi (\cdot )\) (Assumption 2), which are needed to quantify the abstraction error resulting from the DTMC \((S_a, P_a)\).

### **Assumption 1**

Suppose *K* is endowed with a metric \(\rho :K\times K\rightarrow \mathbb R\). Suppose the jump rate function \(\lambda (\cdot )\) is bounded and Lipschitz-continuous, namely that there are finite constants \(\varLambda \) and \(h_\lambda \) such that \(\lambda (t,x)\le \varLambda \) and \(|\lambda (t,x)-\lambda (t,x')|\le h_\lambda \,\rho (x,x')\) for all \(t\in \mathbb R_{\ge 0}\) and \(x,x'\in K\).

Assumption 1 implies the Lipschitz continuity of \(g(\cdot )\).

### **Lemma 2**

The next assumption is on the regularity of the jump measure \(\pi (\cdot )\) through its associated density function.

### **Assumption 2**

Assume that the jump measure \(\pi \) on \((K,\mathcal K)\) given \((\mathbb R_{\ge 0}\times K,\mathcal B(\mathbb R_{\ge 0})\otimes \mathcal K)\) is an integral kernel, i.e. that there exists a sigma-finite basis measure \(\mu \) on \((K,\mathcal K)\) and a jointly measurable function \(p:\mathbb R_{\ge 0}\times K\times K\rightarrow \mathbb R_{\ge 0}\) such that \(\pi (t,x,dy) = p(t,x,y)\mu (dy)\), i.e. \(\pi (t,x,A) = \int _A p(t,x,y)\mu (dy)\) for any \((t,x)\in \mathbb R_{\ge 0}\times K, A\in \mathcal K\). Suppose further that the density function \(p(\tau ,x,y)\) is Lipschitz-continuous, namely that there exists a finite constant \(h_p\) such that \(|p(\tau ,x,y)-p(\tau ',x',y)|\le h_p\left( |\tau -\tau '|+\rho (x,x')\right) \) for all \(\tau ,\tau '\in \mathbb R_{\ge 0}\) and \(x,x',y\in K\).

### *Example 6*

The density function *p*(*t*, *x*, *y*) is computable for the dynamical system representation (5) in Example 4 under suitable assumptions on vector field *f* given the density function of \(\zeta (\cdot )\) [19, 21]. \(\square \)

### *Remark 1*

Using Assumptions 1 and 2, together with Theorem 3 and Lemmas 1 and 2, we finally establish the following result for the error computation of the abstraction.

### **Theorem 4**

The difference between the bounded-time safety probability of the cPJMP and the reach-avoid probability computed over the abstract DTMC is bounded by the sum of two terms: the truncation error \((1-e^{-\kappa })^N\) and an abstraction error that grows linearly in *N* and in the partition diameters \(\delta _t,\delta _x\), with constants depending on \(\varLambda ,h_\lambda ,h_p\), for any choice of the step-horizon *N*.

Notice that there are two terms contributing to the error in Theorem 4. The first term is caused by replacing the discrete infinite-step reach-avoid problem with an *N*-step one. The second term results from the DTMC abstraction. Increasing the number of steps *N* decreases the first term exponentially and increases the second term linearly: as such, this upper bound on the error can be tuned by selecting a sufficiently large step-horizon *N*, and accordingly small partition diameters \(\delta _t,\delta _x\).
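This tuning can be sketched as follows, under an assumed simplified error form \(\varepsilon \approx (1-e^{-\kappa })^N + C\,N\,\delta \), where the single hypothetical constant *C* stands in for the Lipschitz-dependent factors of Theorem 4 and \(\delta \) for the combined partition diameter (our illustration, not the paper's exact expression):

```python
import math

def tune_abstraction(eps, kappa, C):
    """Split a target error eps evenly between the two terms, assuming
    (for illustration) a total error (1 - e**(-kappa))**N + C * N * delta."""
    contraction = 1.0 - math.exp(-kappa)
    # First pick N so the exponentially decaying term is at most eps / 2 ...
    N = math.ceil(math.log(eps / 2.0) / math.log(contraction))
    # ... then shrink the partitions to absorb the linearly growing term.
    delta = eps / (2.0 * C * N)
    return N, delta

N, delta = tune_abstraction(0.1, 1.0, 50.0)
```

Because the second term grows with *N*, choosing *N* larger than necessary forces finer (and costlier) partitions, which is the trade-off observed in the case study below.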

## 5 Case Study: Thermostatically Controlled Loads

*Thermostatically Controlled Loads* (TCLs) have shown potential to be engaged in power system services such as load shifting, peak shaving, and demand response programs. Recent studies have focused on the development of models for aggregated populations of TCLs [12, 20, 28]. Formal abstraction techniques have also been employed to verify properties of TCL models [2, 20]. We employ the model of a TCL as the case study in this paper. The model describes the continuous-time evolution of the temperature in a TCL by a linear SDE. The value of the temperature is available to a thermostat for regulation via a network of independent asynchronous sensors [3, 29]. We recast this model as a cPJMP and quantitatively verify user comfort as a probabilistic safety problem.

**Dynamical Model for the Case Study.** The continuous-time evolution of the temperature \(\theta = \{\theta _t,\, t\in \mathbb R_{\ge 0}\}\) in a *cooling* TCL can be specified by the following linear SDE:

\(d\theta _t = \frac{1}{CR}\left( \theta _a - q_t R P_{rate} - \theta _t\right) dt + \sigma \, dW_t, \qquad (11)\)

where \(\theta _a\) is the ambient temperature, *C* and *R* indicate the thermal capacitance and resistance, \(P_{rate}\) is the rate of energy transfer, \(\sigma \) is the standard deviation of the noise term, and \(W_t\) is a standard Wiener process. The process \(\{q_t,\,t\in \mathbb R_{\ge 0}\}\) represents the state of the thermostat at time *t*, with \(q_t\in \{0,1\}\) for the OFF and ON modes (the latter meaning that the cooler is functioning), respectively. For a given temperature \(\theta _{t}\) at time *t* and a fixed mode \(q_{t}\), the temperature at time \(s\ge t\) is characterized by the solution of (11), namely a Gaussian random variable with mean \(\theta _a - q_t R P_{rate} + \left( \theta _t - \theta _a + q_t R P_{rate}\right) a(s-t)\) and variance \(\sigma ^2\frac{RC}{2}\left( 1-a(s-t)^2\right) \), where \(a(u) := e^{-u/(RC)}\).

We assume the value of temperature is available to the thermostat via a network of sensors at possibly non-uniform time samples \(\{\tau _n,n\in \mathbb N\}\). For a network of independent and asynchronous sensors, the time between two consecutive available values of temperature \((\tau _{n+1}-\tau _n)\), when the number of sensors is large, can be approximated by an exponential distribution [3, 29]. We assume that the associated rate depends on temperature, \(\lambda (\theta _{\tau _n})\), where \(\theta _{\tau _n}\) is the latest available temperature (at time \(\tau _n\)).
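The sampled-temperature model above can be simulated directly. The following Monte Carlo sketch is our illustration with hypothetical parameter values; the Gaussian update between samples follows the exact solution of the SDE with the mode frozen, and for simplicity the jump rate is held constant between consecutive samples.

```python
import math
import random

# Hypothetical parameter values, chosen for illustration only.
C, R, P_RATE, SIGMA = 2.0, 2.0, 14.0, 0.1
THETA_A, THETA_SET = 32.0, 20.0     # ambient and set-point temperature [deg C]
THETA_LO, THETA_HI = 19.0, 21.0     # comfort band (safe set)
LAM0, ALPHA, BETA = 1.0, 1.0, 1.0   # jump rate parameters

def rate(t, theta):
    """Jump rate lam0 * e**(-alpha*t) * cosh(2*beta*(theta - theta_set))."""
    return LAM0 * math.exp(-ALPHA * t) * math.cosh(2 * BETA * (theta - THETA_SET))

def next_temperature(theta, q, dt, rng):
    """Exact Gaussian update of the linear SDE over dt with mode q frozen."""
    a = math.exp(-dt / (R * C))
    mean = THETA_A - q * R * P_RATE + (theta - THETA_A + q * R * P_RATE) * a
    std = SIGMA * math.sqrt(R * C / 2.0 * (1.0 - a * a))
    return rng.gauss(mean, std)

def safety_mc(theta0, q, T, runs=2000, rng=random.Random(2)):
    """Estimate P(sampled temperature stays in the comfort band up to T).

    Simplification: the rate is frozen between samples."""
    safe = 0
    for _ in range(runs):
        t, theta, ok = 0.0, theta0, True
        while True:
            dt = rng.expovariate(rate(t, theta))
            t += dt
            if t > T:
                break
            theta = next_temperature(theta, q, dt, rng)
            if not (THETA_LO <= theta <= THETA_HI):
                ok = False
                break
        safe += ok
    return safe / runs

p = safety_mc(20.0, 1, 1.0)
```

Such simulation gives an empirical reference point against which the guaranteed bounds of the abstraction can be compared.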

**cPJMP for the Case Study.** The values of temperature and the mode of the thermostat evolve over the *hybrid* state space \(K = \{0,1\}\times \mathbb R\), namely a space made up of discrete *and* continuous components [2]. The temperature space \(\mathbb R\) is endowed with the Euclidean metric and with the Borel sigma-algebra. The jump measure of the process is an integral kernel (Assumption 2 is valid), with \(\mu \) being the Lebesgue measure and with a Gaussian density function whose mean and variance are obtained from the solution of (11) between consecutive samples. We are interested in deciding whether the probability that the temperature remains within a given comfort interval \([\theta _-,\theta _+]\) during the time horizon [0, *T*] is greater than a given threshold. This problem can be mathematically formulated as computing the safety probability of the model over the safe set \(\mathcal A = \{0,1\}\times [\theta _-,\theta _+]\).

Note that the density function \(\pi (\cdot )\) is slightly different from the general formulation of cPJMPs in Sect. 2 in that it depends on \((\tau -t)\) (through \(a(\cdot )\)), instead of just the jump time \(\tau \). This difference requires a slight modification of the abstraction error, which is presented next.

**Computation of Probabilistic Safety.** We consider a jump rate function \(\lambda (t,\theta ) = \lambda _0e^{-\alpha t}\cosh [2\beta (\theta -\theta _{\mathfrak s})]\) with positive constants \(\lambda _0,\alpha ,\) and \(\beta \). The term \(e^{-\alpha t}\) models the reduction of the sampling rate of the sensors in time. The hyperbolic cosine term \(\cosh [2\beta (\theta -\theta _{\mathfrak s})]\) models the fact that more frequent temperature measurements are provided by the sensors for larger deviations of the temperature from the set-point. The assumption on the jump rate function in Lemma 1 holds with constant \(\kappa = \lambda _0\cosh (\beta \delta _{\mathfrak d})/\alpha \), whereas Assumption 1 holds with \(h_\lambda = 2\lambda _0\beta \sinh (\beta \delta _{\mathfrak d})\) and \(\varLambda =\lambda _0\cosh (\beta \delta _{\mathfrak d})\). The application of the abstraction technique presented in this paper to the case study leads to the error bound in (13).
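These constants are elementary to compute; a small sketch (our illustration, assuming \(\delta _{\mathfrak d}\) denotes the width of the temperature interval, so that \(|\theta -\theta _{\mathfrak s}|\le \delta _{\mathfrak d}/2\) on the safe set):

```python
import math

def rate_constants(lam0, alpha, beta, delta_d):
    """Constants for lambda(t, theta) = lam0 * e**(-alpha*t) * cosh(2*beta*(theta - theta_s)):
      Lambda -- uniform bound on the rate (Assumption 1),
      h_lam  -- Lipschitz constant of the rate in theta (Assumption 1),
      kappa  -- bound on the integral of the rate over all time (Lemma 1)."""
    Lambda = lam0 * math.cosh(beta * delta_d)          # max of cosh on the band
    h_lam = 2.0 * lam0 * beta * math.sinh(beta * delta_d)  # max of the derivative
    kappa = Lambda / alpha                             # integral of e**(-alpha*t)
    return Lambda, h_lam, kappa

Lambda, h_lam, kappa = rate_constants(1.0, 1.0, 1.0, 2.0)
# The contraction norm bound of Lemma 1 is then 1 - e**(-kappa).
```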

We use the values in Fig. 2 (left) for the parameters in the numerical simulation. The standard deviation of the process noise is \(\sigma = 0.1\,[^\circ C s^{-1/2}]\). The time bound for the safety specification is \(T = 1 [h]\). The parameters of the jump rate function are \(\alpha =1,\beta =1,\lambda _0 =1\), which means that if the TCL is initialized at the set-point, the rate of temperature observations is 20 times higher than the decay rate of the TCL (1/*RC*).

We have implemented Algorithm 1 for the abstraction and computation of the safety probability over the model using the software tool FAUST \(^{\mathsf 2}\) [22]. Figure 2 (right) shows the error bound from (13), as a function of the number of partition bins for the temperature \(n_\theta \) and for the time \(n_t\), with a fixed step-horizon \(N=8\). One can see that, for instance, the abstraction algorithm guarantees an error bound of 0.23 by selecting \(n_\theta =n_t=4\times 10^3\) (\(\delta _\theta = 1.25\times 10^{-4}, \delta _t = 2.5\times 10^{-4}\)), which generates a DTMC with \(3.2\times 10^7\) states. This indicates that meaningful error bounds (less than one) may lead to large DTMCs.

## 6 Conclusions

We have presented an abstraction-based safety verification procedure for pure jump Markov processes with continuous state spaces. While the focus of the work has been on the study of probabilistic safety, the technique can be extended to verify richer temporal properties. The errors can be sharpened via adaptive, non-uniform schemes [18, 19]. cPJMPs are a generalization of CTMCs under the assumption of constant values in between jumps: we plan to investigate the challenging problem of non-constant dynamics between jumps [15].

## Footnotes

- 1.
Recall that a (homogeneous) Poisson process \({\{ \mathcal N_t \mid t \ge 0 \}}\) with rate \(\lambda \) is a Lévy process with \(\mathcal N_0 = 0\) and \(\mathbb P\{\mathcal N_t = n\} = \frac{(\lambda t)^n}{n!} e^{-\lambda t}\).

- 2.
The solution can be derived observing that the process satisfies the recursive equation \(X_{\tau _{n+1}}-X_{\tau _n} = c_0 X_{\tau _n}\), where the jumps occur at \(\tau _n,\,\, n=1,2,3,\ldots \) according to \(\mathcal N_t\).

- 3.
A slight modification of the approach presented in this paper allows for verifying more general quantitative questions such as \(\mathbb P_{\sim p}(\varPhi \,\mathsf U^{(T_1,T_2)}\varPsi )\), defined over any state labels \(\varPsi ,\varPhi \) and over any (possibly unbounded) time interval \((T_1,T_2)\) – an adaptation is required on the construction of sets \(\mathcal G,\mathcal H\) in (7).

## References

- 1. Abate, A., Katoen, J.-P., Lygeros, J., Prandini, M.: Approximate model checking of stochastic hybrid systems. Eur. J. Control 6, 624–641 (2010)
- 2. Abate, A., Prandini, M., Lygeros, J., Sastry, S.: Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems. Automatica 44(11), 2724–2734 (2008)
- 3. Aït-Sahalia, Y., Mykland, P.A.: The effects of random and discrete sampling when estimating continuous-time diffusions. Econometrica 71(2), 483–549 (2003)
- 4. Allen, L.J.S.: An Introduction to Stochastic Processes with Applications to Biology. Pearson/Prentice Hall, Englewood Cliffs (2003)
- 5. Aziz, A., Sanwal, K., Singhal, V., Brayton, R.: Verifying continuous time Markov chains. In: Alur, R., Henzinger, T.A. (eds.) CAV 1996. LNCS, vol. 1102, pp. 269–276. Springer, Heidelberg (1996)
- 6. Baier, C., Haverkort, B., Hermanns, H., Katoen, J.-P.: Model checking continuous-time Markov chains by transient analysis. In: Allen Emerson, E., Prasad Sistla, A. (eds.) CAV 2000. LNCS, vol. 1855, pp. 358–372. Springer, Heidelberg (2000)
- 7. Baier, C., Haverkort, B., Hermanns, H., Katoen, J.-P.: Model-checking algorithms for continuous-time Markov chains. IEEE Trans. Softw. Eng. 29(6), 524–541 (2003)
- 8. Baier, C., Katoen, J.-P.: Principles of Model Checking. MIT Press, Cambridge (2008)
- 9. Baier, C., Katoen, J.-P., Hermanns, H.: Approximate symbolic model checking of continuous-time Markov chains (extended abstract). In: Baeten, J.C.M., Mauw, S. (eds.) CONCUR 1999. LNCS, vol. 1664, pp. 146–161. Springer, Heidelberg (1999)
- 10. Bertsekas, D.P., Shreve, S.E.: Stochastic Optimal Control: The Discrete-Time Case. Athena Scientific, Belmont (1996)
- 11. Billingsley, P.: Convergence of Probability Measures. Wiley, New York (1999)
- 12. Callaway, D.S.: Tapping the energy storage potential in electric loads to deliver load following and regulation, with application to wind energy. Energy Convers. Manag. 50(5), 1389–1400 (2009)
- 13. Chen, T., Diciolla, M., Kwiatkowska, M., Mereacre, A.: Time-bounded verification of CTMCs against real-time specifications. In: Fahrenberg, U., Tripakis, S. (eds.) FORMATS 2011. LNCS, vol. 6919, pp. 26–42. Springer, Heidelberg (2011)
- 14. Chen, T., Han, T., Katoen, J.-P., Mereacre, A.: Model checking of continuous-time Markov chains against timed automata specifications. Logical Methods Comput. Sci. 7(1), 1–34 (2011)
- 15. Davis, M.H.A.: Markov Models and Optimization. Chapman & Hall/CRC Press, London (1993)
- 16.Esmaeil Zadeh Soudjani, S., Abate, A.: Adaptive gridding for abstraction and verification of stochastic hybrid systems. In: QEST, pp. 59–69 (2011)Google Scholar
- 17.Esmaeil Zadeh Soudjani, S., Abate, A.: Higher-order approximations for verification of stochastic hybrid systems. In: Chakraborty, S., Mukund, M. (eds.) ATVA 2012. LNCS, vol. 7561, pp. 416–434. Springer, Heidelberg (2012)CrossRefGoogle Scholar
- 18.Esmaeil Zadeh Soudjani, S., Abate, A.: Adaptive and sequential gridding procedures for the abstraction and verification of stochastic processes. SIAM J. Appl. Dyn. Syst.
**12**(2), 921–956 (2013)MathSciNetCrossRefzbMATHGoogle Scholar - 19.Esmaeil Zadeh Soudjani, S., Abate, A.: Precise approximations of the probability distribution of a Markov process in time: an application to probabilistic invariance. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014 (ETAPS). LNCS, vol. 8413, pp. 547–561. Springer, Heidelberg (2014)CrossRefGoogle Scholar
- 20.Esmaeil Zadeh Soudjani, S., Abate, A.: Aggregation and control of populations of thermostatically controlled loads by formal abstractions. IEEE Trans. Control Syst. Technol.
**23**(3), 975–990 (2015)MathSciNetCrossRefzbMATHGoogle Scholar - 21.Soudjani, S.E.Z., Abate, A.: Quantitative approximation of the probability distribution of a Markov process by formal abstractions. Logical Methods Comput. Sci
**11**(3), 1–29 (2015)MathSciNetzbMATHGoogle Scholar - 22.Esmaeil Zadeh Soudjani, S., Gevaerts, C., Abate, A.: \({\sf FAUST^2}\): formal abstractions of uncountable-state stochastic processes. In: Baier, C., Tineli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 272–286. Springer, Heidelberg (2015)Google Scholar
- 23.Gihman, I.I., Skorokhod, A.V.: The Theory of Stochastic Processes: II. Die Grundlehren der mathematischen Wissenschaften, vol. 218. Springer, Heidelberg (1975)CrossRefGoogle Scholar
- 24.Hinton, A., Kwiatkowska, M., Norman, G., Parker, D.: PRISM: a tool for automatic verification of probabilistic systems. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 441–444. Springer, Heidelberg (2006)CrossRefGoogle Scholar
- 25.Jacod, J., Shiryaev, A.: Limit Theorems for Stochastic Processes. Grundlehren der mathematischen Wissenschaften. Springer, Heidelberg (2010)zbMATHGoogle Scholar
- 26.Kallenberg, O.: Foundations of Modern Probability. Probability and its Applications. Springer, New York (2002)CrossRefzbMATHGoogle Scholar
- 27.Katoen, J.-P., Khattri, M., Zapreev, I.S.: A Markov reward model checker. In: QEST, pp. 243–244. IEEE (2005)Google Scholar
- 28.Mathieu, J.L., Koch, S., Callaway, D.S.: State estimation and control of electric loads to manage real-time energy imbalance. IEEE Trans. Power Syst.
**28**(1), 430–440 (2013)CrossRefGoogle Scholar - 29.Micheli, M., Jordan, M.: Random sampling of a continuous-time stochastic dynamical system. In: Proceedings of the 15th International Symposium on the Mathematical Theory of Networks and Systems (MTNS), pp. 1–15 (2002)Google Scholar
- 30.Panangaden, P.: Labelled Markov Processes. Imperial College Press, London (2009)CrossRefzbMATHGoogle Scholar
- 31.Platen, E., Bruti-Liberati, N.: Numerical Solution of Stochastic Differential Equations with Jumps in Finance. Stochastic Modelling and Applied Probability. Springer, Heidelberg (2010)CrossRefzbMATHGoogle Scholar
- 32.Tkachev, I., Abate, A.: On infinite-horizon probabilistic properties and stochastic bisimulation functions. In: Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, pp. 526–531, Orlando, FL, December 2011Google Scholar
- 33.Tkachev, I., Abate, A.: Characterization and computation of infinite-horizon specifications over Markov processes. Theor. Comput. Sci.
**515**, 1–18 (2014)MathSciNetCrossRefzbMATHGoogle Scholar - 34.Van Kampen, N.G.: Stochastic Processes in Physics and Chemistry. North-Holland Personal Library. Elsevier Science, Amsterdam (2011)zbMATHGoogle Scholar
- 35.Ziai, Y.: Statistical models of claim amount distributions in general insurance. Ph.D. thesis, School of Engineering and Maths, City University London (1979)Google Scholar