Abstract
We study the probabilistic safety verification problem for pure jump Markov processes, a class of models that generalizes continuous-time Markov chains over continuous (uncountable) state spaces. Solutions of these processes are piecewise constant, right-continuous functions from time to states. Their jump (or reset) times are realizations of a Poisson process, characterized by a jump rate function that can be both time- and state-dependent. At each jump, the new state of the process is determined by a (continuous) stochastic conditional kernel. After providing a full characterization of safety properties of these processes, we describe a formal method to abstract the process as a finite-state discrete-time Markov chain; this approach is formal in that it provides a priori error bounds on the precision of the abstraction, based on the continuity properties of the stochastic kernel of the process and of its jump rate function. We illustrate the approach on a case study of thermostatically controlled loads.
1 Introduction
Stochastic processes evolving in continuous time are used to model many phenomena in science and engineering. In recent years, there has been a lot of work in the algorithmic analysis and formal verification of such models with respect to quantitative temporal specifications. For example, the verification of continuous-time Markov chains over finite state spaces has been widely addressed in the literature against properties expressed in temporal logics such as CSL [5–7], MTL [13], and timed-automata specifications [14], and there exist efficient software tools [24, 27].
In this paper, we extend this line of work and study the class of continuous-space, pure jump Markov processes (cPJMP, for short). A cPJMP evolves in continuous time. The process starts at state \(X_{t_0} = x_0\) at time \(t = t_0\) and waits until a random time \(t = T_1\), governed by a Poisson process depending on \(x_0\) and possibly time-inhomogeneous, when it makes a jump to a new state \(X_{T_1} = x_1\) based on a transition kernel that is conditional on the jumping time and on \(x_0\). Then it waits until time \(t= T_2\), when it makes another jump to state \(X_{T_2} = x_2\) with probability that depends on the current time and on \(x_1\), and so on. The states take values over a continuous domain, hence the transition kernel induces continuous measures.
cPJMPs generalize continuous-time, finite-state Markov chains (CTMCs) by allowing time-inhomogeneous behaviors (the waiting times and transition probabilities can depend on time) and allowing for general, continuous state spaces. Correspondingly, non-deterministic extensions of cPJMPs (not explicitly discussed in this work, but directly obtainable from cPJMPs) extend general-space MDPs [10] and LMPs [30] by allowing a random inter-arrival time in between stochastic resets over their continuous state space. cPJMPs can be employed in the definition and analysis of jump-diffusion processes [25]: of interest to this work, the jump component can capture event-driven uncertainties, such as corporate defaults, operational failures, or insured events [31]. It is likewise possible to obtain a cPJMP by random time sampling of a general stochastic differential equation (SDE); indeed, cPJMPs can just as well be thought of as SDEs with jumps whose drift and diffusion terms are zero. This connection with diffusions driven by Wiener processes renders cPJMPs relevant to areas including financial and economic modeling [31], systems biology [4], physics and chemistry [34].
We study the problem of approximately computing the bounded-time safety probability of a cPJMP by generalizing the corresponding algorithms for CTMCs. First, we show that a cPJMP can be embedded into a discrete-time, continuous-space Markov process (DTMP). In this process, we “compile away” the time inhomogeneity of the process by explicitly modeling the time as an additional state variable. Second, we characterize the bounded-time safety probability of the discrete-time Markov process as the least fixed point solution of a system of integral equations that generalize the Bellman equations for CTMCs. Finally, under Lipschitz continuity assumptions on the jump rate function and on the jump measure of the cPJMP, we show how the continuous-space discrete-time Markov process can be approximated by a finite-state discrete-time Markov chain (DTMC), up to any desired degree of precision. Our technical result provides a guaranteed upper bound on the error introduced by the finite-state approximation in the computed bounded-time safety probability.
While we focus on bounded-time safety probability computation, our algorithms can be generalized to provide approximate model checking algorithms for more expressive temporal logics such as continuous-time stochastic logic (CSL) [5, 9]. We demonstrate our results on a case study from energy systems, modeling thermostatically-controlled loads as a cPJMP.
2 Pure Jump Markov Processes in Continuous Time
2.1 Model Definition - Syntax and Semantics
Let \((K,\mathcal K)\) be a measurable space, where K is the (not necessarily finite) state space and \(\mathcal K\) is a sigma-algebra on K. Let \(\varOmega \) be a sample space. Let \({\mathbb R}_{\ge 0}\) be the set of non-negative reals. We consider stochastic processes \(X:\varOmega \times {\mathbb R}_{\ge 0}\rightarrow K\) in continuous time. For any \(t\in {\mathbb R}_{\ge 0}\), the function \(X(\cdot ,t):\varOmega \rightarrow K\) is a random variable, which we denote by \(X_t\). For every \(I\subseteq \mathbb R_{\ge 0}\) we write \(\mathcal F_I = \sigma (X_t,t\in I)\) for the sigma-algebra on \(\varOmega \) generated by the stochastic process X restricted to the index set I. We suppose that for every \(t\in \mathbb R_{\ge 0}\) and \(x\in K\), a probability \(\mathbb P^{t,x}\) is given on \((\varOmega ,\mathcal F_{[t,\infty )})\). The stochastic process \(X: \varOmega \times {\mathbb R}_{\ge 0} \rightarrow K\) is a (pure) jump Markov process if the following conditions hold:
(a) \(\mathcal K\) contains all one-point sets and \(\mathbb P^{t,x}(X_t = x) = 1\) for every \(t\in \mathbb R_{\ge 0}\), \(x\in K\).

(b) For every \(0\le t\le s\) and \(A\in \mathcal K\) the function \(x\mapsto \mathbb P^{t,x}(X_s\in A)\) is \(\mathcal K\)-measurable.

(c) [Markov property] For every \(0\le u \le t\le s\), \(A\in \mathcal K\) we have \(\mathbb P^{u,x}(X_s\in A|\mathcal F_{[u,t]}) = \mathbb P^{t,X_t}(X_s\in A)\), \(\mathbb P^{u,x}\)-a.s.

(d) [Pure Jump property] For every \(\omega \in \varOmega \) and \(t\ge 0\) there exists \(\delta >0\) such that \(X_s(\omega ) = X_t(\omega )\) for \(s\in [t,t+\delta ]\); this is equivalent to requiring that all the trajectories of X are càdlàg [11] when K is given the discrete topology (where all subsets are open).

(e) [Non-explosive property] For every \(\omega \in \varOmega \) the number of jumps of the trajectory \(t\mapsto X_t(\omega )\) is finite on every bounded interval.
Condition (a) enables us to assign probabilities to any points \(x\in K\). In particular, the probability measure \(\mathbb P^{t,x}\) assigns probability 1 to x, so that the process is initialized deterministically at x at time t. Condition (b) is essential for transporting any probability measure on \(X_t\) to the events \(X_s\in A,\,\,A\in \mathcal K\), for any \(t\le s\).
Intuitively, a Markov process \(X: \varOmega \times {\mathbb R}_{\ge 0} \rightarrow K\) in continuous time is a pure jump process if, starting from any point \(x\in K\), the process is right continuous, admits constant trajectories except at isolated jumps, and allows only for a finite number of isolated jumps within any bounded interval. A cPJMP is described by means of the joint law Q of the first jump time \(T_1\) and of the corresponding position \(X_{T_1}\). To proceed formally, we first fix \(t\ge 0\) and \(x\in K\) and define the first jump time
with the convention that \(T_1(\omega ) = \infty \) if the indicated set is empty. Clearly, the value of \(T_1\) depends on t. Its associated probability measure also depends on x through \(\mathbb P^{t,x}\). Allowing this jump time to be equal to infinity requires extending the definition of the process X as follows. Take an extra dummy point \(\varDelta \notin K\) and redefine \(X:\varOmega \times {\mathbb R}_{\ge 0}\cup \{\infty \}\rightarrow K\cup \{\varDelta \}\) such that \(X(\omega ,\infty ) = X_\infty (\omega ) = \varDelta \) for all \(\omega \in \varOmega \). Then \(X_{T_1}:\varOmega \rightarrow K\cup \{\varDelta \}\) is well defined. Note that \(X_{T_1}\) is associated with a probability measure first through the random variable \(T_1\) (the first jump time) and then through the process X conditioned on knowing this jump time.
On the extended space \(S:=\left( \mathbb R_{\ge 0}\times K\right) \cup \{(\infty ,\varDelta )\}\) we consider the smallest sigma-algebra, denoted by \(\mathcal S\), containing \(\{(\infty ,\varDelta )\}\) and all sets of \(\mathcal B(\mathbb R_{\ge 0})\otimes \mathcal K\) (here and in the following \(\mathcal B(\varLambda )\) denotes the Borel sigma-algebra of a topological space \(\varLambda \), and \(Y\otimes Z\) is the product sigma-algebra of two sigma-algebras Y, Z, that is the smallest sigma-algebra generated by subsets of the form \(A_1\times A_2\), \(A_1\in Y,A_2\in Z\)). Note that this sigma-algebra \(\mathcal S\) is smaller than the product of two sigma-algebras defined on \(\mathbb R_{\ge 0}\cup \{\infty \}\) and \(K\cup \{\varDelta \}\). The extended process X ensures that \(\mathcal S\) is sufficient to contain the associated probability measure of \((T_1,X_{T_1})\). With these definitions, \((T_1,X_{T_1})\) is a random variable with values in \((S,\mathcal S)\), and its law under \(\mathbb P^{t,x}\) is denoted by \(Q(t,x,\cdot )\).
We first construct \(Q(t,x,\cdot )\) for the continuous part of \(\mathcal S\) and later discuss how to assign probabilities to the single point \((\infty ,\varDelta )\). We will assume that Q is constructed starting from a given transition measure from \(\mathbb R_{\ge 0}\times K\) to \(\mathcal K\), called rate measure and denoted by \(\nu (t,x,A),t\in \mathbb R_{\ge 0},x\in K,A\in \mathcal K\). We require that \(A\mapsto \nu (t,x,A)\) is a positive measure on \(\mathcal K\) for all \(t\in \mathbb R_{\ge 0}\) and \(x\in K\), and that \((t,x)\mapsto \nu (t,x,A)\) is \(\mathcal B(\mathbb R_{\ge 0})\otimes \mathcal K-\)measurable for all \(A\in \mathcal K\). We also assume that the rate measure \(\nu \) satisfies the two conditions
(f) \(\sup \{\nu (t,x,K)|t\in \mathbb R_{\ge 0},x\in K\}<\infty \) and

(g) \( \nu (t,x,\{x\}) = 0\) for all \(t\in \mathbb R_{\ge 0},x\in K.\)
Condition (f) implies a finite number of jumps within any bounded interval, thus ensuring the non-explosiveness condition (e) stated above. Condition (g) rules out jumps from a state to itself, in accordance with the definition of the jump time in (1). Define
where \(\mathbf 1_A(\cdot )\) is the indicator function of any set A. Therefore \(\lambda \) is a nonnegative bounded measurable function and \(\pi \) is a transition probability on K satisfying
where \(\varvec{\delta }_x\) is the Dirac measure at x. Function \(\lambda \) is called the jump rate function, and \(\pi \) the jump measure. Note that we have \(\nu (t,x,A) = \lambda (t,x)\pi (t,x,A), \forall t\in \mathbb R_{\ge 0},x\in K,A\in \mathcal K\). Given the rate measure \(\nu \), we require that for the Markov process X we have, for \(0\le t\le a<b\le \infty , x\in K, A\in \mathcal K\),
where Q was described above as the law of \((T_1,X_{T_1})\) under \(\mathbb P^{t,x}\). Note that (2) completely specifies the probability measure \(Q(t,x,\cdot )\) on \((S,\mathcal S)\): indeed simple computations show that
for all \(s\ge t\) and we clearly have \(\mathbb P^{t,x}(T_1 \le t) = Q(t,x,[0,t]\times K) = 0.\) Note that (3) assigns probability to the single point \((\infty ,\varDelta )\), which completes the definition of \(Q(t,x,\cdot )\) on \((S,\mathcal S)\).
We may interpret (4) as the statement that \(T_1\) has exponential distribution on \([t,\infty ]\) with variable rate \(\lambda (r,x), r \ge t\). Moreover, the probability \(\pi (s,x,A)\) can be interpreted as the conditional probability that \(X_{T_1}\) is in \(A\in \mathcal K\), given that the jump time \(T_1 = s\), or more precisely,
2.2 Examples and Related Models
Example 1
Poisson-driven differential equation [31, Section 1.7]. Let the process \({\{ \mathcal N_t\mid t\ge 0 \}}\) represent a standard Poisson process with homogeneous rate \(\lambda \). Consider a pure jump process \(X = \{X_t,\,\, t\in \mathbb R_{\ge 0}, X_t \in \mathbb R \}\), driven by the Poisson process \(\mathcal N_t\), where its value \(X_t\) at time t satisfies the SDE
with the deterministic initial value \(X_0\in \mathbb R\). The function \(c:\mathbb R_{\ge 0}\times \mathbb R\rightarrow \mathbb R\) is called the jump coefficient. For the case of \(c(t,x) = c_0 x\) with the constant \(c_0\in \mathbb R_{\ge 0}\) and initial value \(X_0>0\), the process has an explicit representation
From this explicit representation, we can compute properties of the process \(X_t\), such as the probability that the process does not exceed \(\alpha _{\mathfrak h}>0\) within the time interval [0, T]. This probability is analytically computable for the above simple process: defining \(\beta _{\mathfrak h} = \left( \ln \alpha _{\mathfrak h}-\ln X_0\right) /\ln (c_0+1)\), this probability is \(\sum _{n=0}^{\lfloor \beta _{\mathfrak h}\rfloor }e^{-\lambda T}(\lambda T)^n/n!\). \(\square \)
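The truncated Poisson sum above is straightforward to evaluate numerically. The following is a minimal sketch (function and argument names are ours, not from the chapter), assuming \(c_0>0\) so that \(X_t = X_0(1+c_0)^{\mathcal N_t}\) is increasing in the number of jumps:

```python
import math

def safety_prob_multiplicative(x0, c0, lam, T, alpha):
    """P(X_t <= alpha for all t in [0, T]) for X_t = x0 * (1 + c0)^{N_t}:
    the truncated Poisson sum of Example 1.  Assumes c0 > 0 and x0 > 0."""
    if alpha < x0:
        return 0.0  # already unsafe at time 0
    # beta_h = (ln alpha - ln x0) / ln(c0 + 1): largest admissible jump count
    beta = (math.log(alpha) - math.log(x0)) / math.log(c0 + 1.0)
    mu = lam * T
    return sum(math.exp(-mu) * mu**n / math.factorial(n)
               for n in range(math.floor(beta) + 1))
```

For instance, with \(X_0=1\), \(c_0=1\), \(\lambda =T=1\) and \(\alpha _{\mathfrak h}=1.5\), a single jump already violates safety, so the safety probability reduces to \(e^{-\lambda T}\).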
Example 2
Compound Poisson processes [31, Section 1.1] represent a generalization of Poisson processes, with exponential waiting times between jumps but where jump sizes, rather than being deterministic, follow an arbitrary distribution. Let \(\{y_n\}_{n\ge 1}\) be a sequence of independent random variables with distribution \(\mu \) for all \(n\ge 1\) and assume that the standard Poisson process \({\{ \mathcal N_t\mid t\ge 0 \}}\) with parameter \(\lambda >0\) is independent of \(\{y_n\}_{n\ge 1}\). The compound Poisson process \(X_t\) is represented in the form \(X_t = \sum _{n=1}^{\mathcal N_t} y_n\). A typical application of compound Poisson processes is to model the aggregate claim up to time t generated by a portfolio of insurance policies, where the individual claims are distributed according to \(\mu \). Let us assume the gamma distribution \(y_n\sim \varGamma (a,b)\) for the individual claims [35] and answer the same safety question as in the previous example: what is the probability that the aggregate claim does not exceed \(\alpha _{\mathfrak h}>0\) in the time interval [0, T]? This probability is also analytically computable, and results in
where \(\varGamma (\cdot )\) is the gamma function, and \(\gamma (\cdot ,\cdot )\) is the lower incomplete gamma function. \(\square \)
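The analytic expression can be cross-checked by simulation. Below is a hedged Monte Carlo sketch (names and the shape/scale gamma parametrization are our choices); since the aggregate claim is non-decreasing in time, safety on [0, T] depends only on the value at time T:

```python
import math
import random

def mc_safety_compound_poisson(a, b, lam, T, alpha_h, n_trials=20000, seed=0):
    """Monte Carlo estimate of P(aggregate claim at time T <= alpha_h),
    where claims are iid Gamma(shape a, scale b) and arrivals are Poisson(lam)."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(n_trials):
        # count the Poisson arrivals in [0, T]
        n, t = 0, rng.expovariate(lam)
        while t <= T:
            n += 1
            t += rng.expovariate(lam)
        total = sum(rng.gammavariate(a, b) for _ in range(n))
        safe += (total <= alpha_h)
    return safe / n_trials
```

As a sanity check, with \(\alpha _{\mathfrak h}=0\) the estimate should approach \(e^{-\lambda T}\), the probability of no claim at all.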
Notice that the safety probability is expressible analytically in the above two examples. This is because, first, the trajectories of the solution process are non-decreasing, and second, the distribution of the solution process conditioned on the value of the underlying Poisson process is analytically computable. In general, however, trajectories of cPJMPs cannot be derived explicitly, so the safety probability is not analytically expressible. In Sect. 3 we provide a general characterization of the solution of the probabilistic safety problem, and in Sect. 4 we develop a formal approximation method to compute this solution numerically.
Example 3
Continuous-time Markov chains [8]. The class of cPJMP we consider includes, as special cases, all the time-homogeneous, nonexplosive, jump Markov processes: these correspond to a function \(\nu \) not depending on the time variable t. Within this time-homogeneous case we need to retain the boundedness assumption in (f) for the rate function. Assuming further that K is a finite or countably infinite set, we obtain the class of continuous-time Markov chains characterized by the transition rates matrix \(\nu (x,\{y\})_{x,y\in K}\), namely
where \(E(x) = \sum _{x'\in K} \nu (x,{\{ x' \}})\). The probability that the system stays within a set \(A\subseteq K\) in the interval [0, T] can be expressed as the solution of a system of integral equations [7], which is a special case of the Bellman fixed-point equation developed in Sect. 3 for cPJMPs, but not expressible in closed form. \(\square \)
Example 4
cPJMP defined by dynamical systems. Consider a process X with piecewise-constant trajectories, which resets (or jumps) at time t over a space K according to a vector field \(f:K\times \mathbb R^n\times \mathbb R_{\ge 0}\rightarrow K\), so that
where \(\zeta (\cdot )\) is a continuous-time stationary process with a given, time-independent density function. The resets for the process follow a Poisson process \(\mathcal N_t, t\ge 0, \mathcal N_0 = 0\), with a rate \(\lambda \) depending on time t and on the continuous state of the process x(t). Notice that the dependence of the vector field f on time is in accordance with [25]. The map f, together with the distribution of the process \(\zeta (\cdot )\), uniquely defines a jump measure \(\pi (t,x,A)\), which gives the probability of jumping from any state x at time t to any (measurable) subset of the state space \(A\subseteq K\) [26, Proposition 7.6]:
where \(T_{\zeta }\) is the distribution of the random vector \(\zeta (0)\) (in fact, of any \(\zeta (t)\) since the process is stationary and time-independent). \(\square \)
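When the reset map f and the noise distribution are available only as samplers, the jump measure \(\pi (t,x,A)\) of Example 4 can be estimated empirically. A minimal sketch (all names are illustrative; the inputs are user-supplied callables):

```python
import random

def estimate_jump_measure(f, sample_zeta, t, x, indicator_A, n=50000, seed=0):
    """Empirical estimate of pi(t, x, A): push samples of the stationary
    noise zeta through the reset map f and count the hits in A."""
    rng = random.Random(seed)
    hits = sum(indicator_A(f(x, sample_zeta(rng), t)) for _ in range(n))
    return hits / n

# Toy instance: f(x, z, t) = x + z with z ~ Uniform(0, 1) and
# A = {y : y <= x + 0.5}, so pi(t, x, A) = 0.5 exactly.
p = estimate_jump_measure(lambda x, z, t: x + z,
                          lambda rng: rng.random(),
                          t=0.0, x=1.0,
                          indicator_A=lambda y: y <= 1.5)
```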
2.3 Embedded Discrete-Time Markov Process of a cPJMP
We have defined a cPJMP on a measurable space \((K,\mathcal K)\) through the transition measure \(\nu \). The trajectories of a cPJMP are piecewise constant, which makes them fully representable by their jump times and corresponding values. It is worth studying the properties of the random variables \((T_n(\omega ),X_{T_n}(\omega ))\), \(n\in \mathbb N:= \{0,1,2,\ldots \}\), where \(T_n\) is the \(n^{\text {th}}\) jump time and \(X_{T_n}\) is the corresponding value of the process. The ensuing Theorem 1 states that \((T_n,X_{T_n})_{n\in \mathbb N}\) can be considered as a discrete-time Markov process (DTMP) by a slight extension of the definition of Q. The discrete time is indexed by nonnegative natural numbers \(n\in \mathbb N\), as opposed to continuous time indexed by \(t\in \mathbb R_{\ge 0}\).
Definition 1
A discrete-time Markov process \((Y_n)_{n\in \mathbb N}\) is uniquely defined by a triple \(\mathfrak D = (E_{\mathfrak y},\mathcal E_{\mathfrak y}, P_{\mathfrak y})\), where \((E_{\mathfrak y},\mathcal E_{\mathfrak y})\) is a measurable space and \(P_{\mathfrak y}:E_{\mathfrak y}\times \mathcal E_{\mathfrak y}\rightarrow [0,1]\) is a transition kernel such that for any \(y\in E_{\mathfrak y}\) and \(A\in \mathcal E_{\mathfrak y}\), \(P_{\mathfrak y}(y,A)\) gives the probability that \(Y_{n+1}\in A\), conditioned on \(Y_n = y\). \(E_{\mathfrak y}\) is called the state space of the DTMP \(\mathfrak D\) and the elements of \(E_{\mathfrak y}\) are the states of \(\mathfrak D\). The process is time-inhomogeneous if \(P_{\mathfrak y}\) depends also on the time index n.
We adapt the following result from [23, Chapter III, Section 1, Theorem 2].
Theorem 1
Starting from \(T_0 = t\) define inductively \(T_{n+1} = \inf \{s>T_n\,:\, X_s\ne X_{T_n}\}\), with the convention that \(T_{n+1} = \infty \) if the indicated set is empty. Under the probability \(\mathbb P^{t,x}\), the sequence \((T_n,X_{T_n})_{n\in \mathbb N}\) is a DTMP in \((S,\mathcal S)\) with transition kernel Q, provided we extend the definition of Q making the state \((\infty ,\varDelta )\) absorbing, by defining \(Q(\infty ,\varDelta ,\mathbb R_{\ge 0}\times K) = 0, Q(\infty ,\varDelta ,\{(\infty ,\varDelta )\}) = 1\).
Note that \((T_n,X_{T_n})_{n\in \mathbb N}\) is time-homogeneous, although in general X is not.
Theorem 1 states that given the stochastic process \(X:\varOmega \times \mathbb R_{\ge 0}\rightarrow K\) with probability measure \(\mathbb P^{t,x}\) as defined in Sect. 2, we can construct a DTMP on \((S,\mathcal S)\) with the extended transition kernel Q, whose state includes jump times and jump values of X. The converse is also true, as described next, which allows for a simple description of the process X. Suppose one starts with a DTMP \((\tau _n,\xi _n)_{n\in \mathbb N}\) in S with transition probability kernel Q and a given starting point \((t,x)\in \mathbb R_{\ge 0}\times K\). One can then define a process Z in K by setting \(Z_t = \sum _{n=0}^{N_{\mathfrak y}}\xi _n \mathbf 1_{[\tau _n,\tau _{n+1})}(t)\), where \(N_{\mathfrak y} := \sup \{n\in \mathbb N\, :\, \tau _n<\infty \}\). Then Z has the same law as the process X under \(\mathbb P^{t,x}\).
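The embedded chain and the reconstruction of Z described above can be sketched in code. This is only an illustration under a simplifying assumption: the jump rate is taken constant in t between jumps, so waiting times are exponential (the general time-varying case would need, e.g., thinning); all names are ours:

```python
import random

def simulate_embedded_chain(t0, x0, T, lam, sample_next, seed=0):
    """Sample the embedded DTMP (T_n, X_{T_n}) up to the horizon T.
    Assumes lam(t, x) is constant in t between jumps (sketch only)."""
    rng = random.Random(seed)
    t, x = t0, x0
    chain = [(t, x)]
    while True:
        t = t + rng.expovariate(lam(t, x))
        if t > T:
            break
        x = sample_next(t, x, rng)
        chain.append((t, x))
    return chain

def z_at(chain, t):
    """Value of the reconstructed piecewise-constant process Z at time t
    (right-continuous: Z_t = xi_n for t in [tau_n, tau_{n+1}))."""
    value = chain[0][1]
    for tau, x in chain:
        if tau <= t:
            value = x
        else:
            break
    return value
```

With a counting reset (`x + 1` at every jump) and a constant rate, this reproduces a standard Poisson process, which gives a quick sanity check.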
Example 5
For a CTMC defined by its transition rate matrix \(\nu (x,{\{ x' \}})\), we get that \(\pi (x,\{y\})_{x,y\in K}\) is the stochastic transition matrix of the corresponding embedded discrete-time Markov chain (DTMC). \(\square \)
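The construction in Example 5 follows from the factorization \(\nu (t,x,A)=\lambda (t,x)\pi (t,x,A)\): the exit rate is \(E(x)=\sum _{x'}\nu (x,\{x'\})\) and the embedded DTMC has \(\pi (x,\{y\})=\nu (x,\{y\})/E(x)\). A minimal sketch (the function name and the self-loop convention for absorbing states are our choices):

```python
def embedded_dtmc(nu):
    """Embedded DTMC of a finite CTMC: nu[x][y] are the off-diagonal rates,
    with nu[x][x] = 0 per condition (g)."""
    m = len(nu)
    pi = [[0.0] * m for _ in range(m)]
    for x in range(m):
        E = sum(nu[x])            # exit rate E(x)
        if E == 0:                # absorbing state: DTMC self-loop
            pi[x][x] = 1.0
        else:
            for y in range(m):
                pi[x][y] = nu[x][y] / E
    return pi
```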
3 Bounded-Time Safety Probability for cPJMPs
In this section, we characterize the bounded-time safety probability for a cPJMP X, that is the quantity
for a given initial time \(t_0\in [0,T]\), \(T<\infty \), and initial state \(X_{t_0} = x_0\). Note that in this setup we must account for the initial time \(t_0\) alongside the initial state \(x_0\) because the process is time-inhomogeneous.
For the characterization of \(p_{\mathcal A}(t_0,x_0,T)\), we first construct the DTMP \(\mathfrak M = (S,\mathcal S,Q)\) with state \(s_n = (\tau _n,x_n)\in S\) according to Theorem 1. In order to formulate the safety problem over the new process \(\mathfrak M\), we introduce a transition system with a set of states \(\mathcal Q = \{\mathbb {S},\mathbb {U},\mathbb {W}\}\) representing Safe, Unsafe, and Wait. The transition system is initialized at \(\mathbb {W}\) or \(\mathbb {U}\) depending on whether the initial state of the process is in the safe set or not. A transition from \(\mathbb {W}\) to \(\mathbb {S}\) is activated if the next jump time \(\tau ^+\) is outside the interval \(\mathcal T := [0,T]\) and the next state \(x^+\in K\). A transition \(\mathbb {W}\rightarrow \mathbb {U}\) is activated if \(\tau ^+\in \mathcal T\) and the next state is outside the safe set. Finally, the self loops at all the states of \(\mathcal Q\) characterize all other dynamics of the transition system.
Based on the transition system in Fig. 1, the quantity \(p_{\mathcal A}(t_0,x_0,T)\) can be characterized as the probability of reaching the state \(\mathbb {S}\) in the transition system under the dynamics of \(\mathfrak M\), which is equal to the likelihood associated to the set of words \(\{ \mathbb {W} ^{+}\mathbb {S} \} = \{\mathbb {W}\mathbb {S},\mathbb {W}^2\mathbb {S},\mathbb {W}^3\mathbb {S},\ldots \}\) (we have denoted by \(\mathbb {W} ^{+}\) the Kleene plus, namely the Kleene star without the empty word \(\epsilon \)). This can be written as the infinite series \(p_{\mathcal A}(t_0,x_0,T) = \sum _{n=1}^\infty \mathbb P\left\{ \mathbb {W}^n\mathbb {S}\right\} \), which equals
where \(\mathcal G := \mathcal T\times \mathcal A\) and \(\mathcal H := (T,\infty )\times K\cup \{(\infty ,\varDelta )\}\). Note that the non-explosive condition posed in (e) and reinforced by assumption (f) on \(\nu (\cdot )\) implies that \(\lim \limits _{n\rightarrow \infty }\mathbb P\left\{ \mathbb {W}^n\mathbb {S}\right\} = 0\), which is a necessary condition for the series (7) to converge. We show in the rest of this section that the infinite series (7) converges and is approximately computable via its partial sums under a mild assumption on the jump rate function (a bound on the integral of \(\lambda (\cdot )\) over the interval \(\mathcal T\)).
The reformulation of \(p_{\mathcal A}(t_0,x_0,T)\) as (7) indicates its close relationship with the infinite-horizon probabilistic reach-avoid specification over DTMPs. This problem is studied in [32, 33], which formulate the solution as a Bellman equation and describe convergence properties of the series based on the contractivity of the stochastic operator associated to the DTMP. The next theorem can be seen as an extension of [33, Section 3.1]: it presents a Bellman equation characterizing the safety probability \(p_{\mathcal A}(t_0,x_0,T)\), namely an equation for the infinite-horizon reach-avoid problem over the DTMP \(\mathfrak M\) with safe set \((\mathcal G\cup \mathcal H)\in \mathcal S\) and target set \(\mathcal H\in \mathcal S\).
Theorem 2
The solution of the probabilistic safety problem defined in (6) can be characterized as \(p_{\mathcal A}(t_0,x_0,T) = V(t_0,x_0)- \mathbf 1_{\mathcal H}(t_0,x_0)\), where the value function \(V:S\rightarrow [0,1]\) is the least solution of the fixed-point Bellman equation
In order to characterize the solution of the fixed-point equation (8), we consider the value functions \(V_n:S\rightarrow [0,1]\), \(n\in \mathbb N\), for the finite-horizon reach-avoid probability \(V_n(s) := \mathbb P\left\{ \mathbb {S},\mathbb {W}\mathbb {S},\mathbb {W}^2\mathbb {S},\ldots ,\mathbb {W}^n\mathbb {S}\right\} \). These functions satisfy the Bellman recursion
Then we have that \(V(s) = \lim _{n\rightarrow \infty }V_n(s)\), where the limit is point-wise non-decreasing [33, Section 3.1]. Equation (9) indicates that the support of the value functions \(V_n(\cdot )\) is contained in the set \(\mathcal G\cup \mathcal H\). These value functions are equal to one over the set \(\mathcal H\) and satisfy the following recursion for any \(s=(t,x)\in \mathcal G\):
In the following we provide an operator perspective to (10), show that the associated operator is contractive, and quantify an upper bound for the quantity \(\Vert V-V_n\Vert \) as a function of n.
Let \(\mathbb B\) denote the space of all real-valued, bounded and measurable functions on \(\mathcal G\). Then \(\mathbb B\) is a Banach space with a norm given by \(\Vert f\Vert := \sup _{s\in \mathcal G}|f(s)|\) for \(f\in \mathbb B\). An operator \(\mathcal J:\mathbb B\rightarrow \mathbb B\) is called linear if
The quantity \(\Vert \mathcal J\Vert = \sup _{\Vert f\Vert \le 1}\Vert \mathcal J(f)\Vert \) is called the norm of the linear operator \(\mathcal J\). We say that a linear operator \(\mathcal J\) is a contraction whenever it holds that \(\Vert \mathcal J\Vert <1\). We define the linear operator \(\mathcal I_{\mathcal G} f(s) := \int _{\mathcal G}f(\bar{s})Q(s,d\bar{s}),\) which is associated with equation (10). The following lemma raises assumptions on the jump rate function \(\lambda \) to render the operator \(\mathcal I_{\mathcal G}\) contractive.
Lemma 1
For a given set \(\mathcal G = \mathcal T\times \mathcal A\), with \(\mathcal T = [0,T]\) and bounded safe set \(\mathcal A\), suppose there exists a finite constant \(\kappa \ge \sup \left\{ \int _0^T\lambda (r,x)dr,\,x\in \mathcal A\right\} \). Then the invariance operator \(\mathcal I_{\mathcal G}\) is contractive with the norm \(\Vert \mathcal I_{\mathcal G}\Vert \le 1-e^{-\kappa }\).
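When \(\lambda \) is available in closed form, the constant \(\kappa \) of Lemma 1 can be estimated numerically. The sketch below (names ours) uses the midpoint rule over a finite sample of safe states; note it is only an estimate of the supremum, not a certified bound, which would additionally need the Lipschitz constants of \(\lambda \):

```python
def kappa_estimate(lam, T, sample_states, n=2000):
    """Estimate kappa >= sup_x int_0^T lam(r, x) dr over a finite state sample,
    integrating in time with the midpoint rule."""
    dt = T / n
    return max(sum(lam((i + 0.5) * dt, x) for i in range(n)) * dt
               for x in sample_states)
```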
Theorem 3
Under the assumption of Lemma 1 the sequence \(\{V_n\}_{n\in \mathbb N}\) satisfies
and converges uniformly to \(V(\cdot )\). Moreover, \(\Vert V-V_n\Vert \le (1-e^{-\kappa })^n\) for all \(n\in \mathbb N\).
The previous result allows us to select a sufficiently large n in order to make the difference between V and \(V_n\) smaller than a predefined threshold. For a given threshold, say \(\epsilon _1\), one can select \(N\ge \ln \epsilon _1/\ln \left( 1-e^{-\kappa }\right) \) and compute \(V_N\). Theorem 3 then guarantees that \(|V(s)-V_N(s)|\le \epsilon _1\) for all \(s\in S\). The next section is devoted to the precise computation of \(V_N\) over the uncountable state space S, for a preselected N.
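The selection of N above can be sketched directly (function name ours):

```python
import math

def horizon_for_accuracy(kappa, eps1):
    """Smallest N with (1 - e^{-kappa})^N <= eps1, i.e.
    N >= ln(eps1) / ln(1 - e^{-kappa}), per Theorem 3."""
    contraction = 1.0 - math.exp(-kappa)
    return math.ceil(math.log(eps1) / math.log(contraction))
```

For example, \(\kappa =1\) and \(\epsilon _1=0.01\) give \(N=11\), since \((1-e^{-1})^{10}\approx 0.0102\) just misses the threshold.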
4 Finite DTMCs as Formal Approximations of cPJMPs
In the previous sections we have shown that the bounded-time safety verification of the given cPJMP can be approximated by a step-bounded reach-avoid verification of a DTMP, with guaranteed error bounds. Due to the lack of analytical solutions, the verification of DTMPs against PCTL specifications (amongst which reach-avoid) is studied in the literature via finite abstractions [1, 16], which reduce the problem to the PCTL verification of discrete-time, finite-space Markov chains (DTMCs) [17, 18]. In other words, the goal of the DTMC abstraction is to provide a discrete and automated computation of the reach-avoid probability. The approach is formal in that it allows for the computation of explicit bounds on the error associated with the abstraction.
The DTMC is obtained by state-space partitioning of the DTMP: equation (10) indicates that we only need to partition the bounded set \(\mathcal G\). The abstraction procedure, presented in Algorithm 1, generates a DTMC \((S_a,P_a)\) with finite state space \(S_a\) and transition probability matrix \(P_a\). Over this DTMC we compute \(p_a(s_i,N)\), which is the probability of reaching target state \(s_{m+1}\), while avoiding the unsafe state \(s_{m+2}\), during the step horizon \(0,\ldots ,N\), as a function of the initial state \(s_i\in S_a\). This is obtained via a discrete version of equation (10), which boils down to matrix manipulations [8].
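The matrix computation of \(p_a(s_i,N)\) is a finite value iteration. The following is a minimal sketch (names ours; the chapter's Algorithm 1 additionally constructs \(S_a\) and \(P_a\) from the partition, which we take as given here):

```python
def reach_avoid(P, target, unsafe, N):
    """N-step reach-avoid probabilities on a finite DTMC with row-stochastic
    matrix P: probability of hitting `target` within N steps while never
    entering `unsafe`, for every initial state."""
    m = len(P)
    V = [1.0 if i in target else 0.0 for i in range(m)]
    for _ in range(N):
        V = [1.0 if i in target else
             0.0 if i in unsafe else
             sum(P[i][j] * V[j] for j in range(m))
             for i in range(m)]
    return V
```

On a 3-state chain with a waiting state 0, target 1, and unsafe 2, the iteration reproduces the geometric series \(0.3\sum _k 0.5^k\) for the probability from state 0.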
We now introduce some regularity assumptions on the jump rate function \(\lambda (\cdot )\) (Assumption 1) and on the jump measure \(\pi (\cdot )\) (Assumption 2), which are needed to quantify the abstraction error resulting from the DTMC \((S_a, P_a)\).
Assumption 1
Assume the space K is endowed with a metric \(\rho :K\times K\rightarrow \mathbb R\). Suppose the jump rate function \(\lambda (\cdot )\) is bounded and Lipschitz-continuous, namely that there are finite constants \(\varLambda \) and \(h_\lambda \) such that \(\lambda (t,x)\le \varLambda \) and
for all \((t,x),(t,x')\in \mathcal G\).
Assumption 1 implies the Lipschitz continuity of \(g(\cdot )\).
Lemma 2
Under Assumption 1, the function \(g(\cdot )\) in (10) is Lipschitz continuous, namely for all \(s =(t,x),s' = (t',x')\in \mathcal G\),
The next assumption is on the regularity of the jump measure \(\pi (\cdot )\) through its associated density function.
Assumption 2
Let \(\mathcal K\) be the Borel sigma-algebra on K. Assume that the jump measure \(\pi \) on \((K,\mathcal K)\) given \((\mathbb R_{\ge 0}\times K,\mathcal B(\mathbb R_{\ge 0})\otimes \mathcal K)\) is an integral kernel, i.e. that there exists a sigma-finite basis measure \(\mu \) on \((K,\mathcal K)\) and a jointly measurable function \(p:\mathbb R_{\ge 0}\times K\times K\rightarrow \mathbb R_{\ge 0}\) such that \(\pi (t,x,dy) = p(t,x,y)\mu (dy)\), i.e. \(\pi (t,x,A) = \int _A p(t,x,y)\mu (dy)\) for any \((t,x)\in \mathbb R_{\ge 0}\times K, A\in \mathcal K\). Suppose further that the density function \(p(\tau ,x,y)\) is Lipschitz-continuous, namely that there exists a finite constant \(h_p\), such that
Example 6
The density function p(t, x, y) is computable for the dynamical system representation (5) in Example 4 under suitable assumptions on vector field f given the density function of \(\zeta (\cdot )\) [19, 21]. \(\square \)
Remark 1
Assumption 2 enables us to specify the conditional density function of the DTMP \((T_n,X_{T_n})_{n\in \mathbb N}\) as
which gives the integral representation of the stochastic kernel of the process as \(Q(t,x,(a,b),A) = \int _a^b\int _A t_{\mathfrak s}(\tau ,y|t,x) \mu (dy)d\tau \). \(\square \)
Using Assumptions 1 and 2, together with their consequences (Lemmas 1 and 2, and Theorem 3), we finally establish the following result on the error of the abstraction.
Theorem 4
Under Assumptions 1 and 2, the following inequality holds:
\(\left| p(t_0,x_0) - p_a(s_{\mathfrak r},N)\right| \le \frac{\kappa ^{N+1}}{(N+1)!} + N\left( h_x\delta _x + h_t\delta _t\right) ,\)
where \(h_x = h_p\mu (\mathcal A) + 3Th_\lambda \) and \(h_t = 3\varLambda \), whereas \(\kappa \) is defined in Lemma 1. The constants \(\delta _x,\delta _t\) denote the partition diameters in the state and time coordinates, namely the largest diameter of the state-space partition sets and the largest length of the time partition intervals, respectively.
In the inequality above, \(s_{\mathfrak r}\) is the representative point of the partition set to which the state \((t_0,x_0)\) belongs, and \(p_a(s_{\mathfrak r},N)\) is the reach-avoid probability computed over the DTMC \((S_a,P_a)\) with finite step-horizon N.
Notice that two terms contribute to the error in Theorem 4. The first term is caused by replacing the discrete infinite-step reach-avoid problem with an N-step one; the second term results from the DTMC abstraction. Increasing the number of steps N decreases the first term exponentially and increases the second term linearly: as such, this upper bound on the error can be tuned by selecting a sufficiently large step-horizon N and accordingly small partition diameters \(\delta _t,\delta _x\).
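This trade-off suggests a simple recipe for choosing the abstraction parameters: first pick N so that the truncation term consumes half of a target error budget, then split the rest between the two partition diameters. The sketch below is a hypothetical helper, not part of the paper's algorithm; it models the truncation term as a Poisson-tail expression \(\kappa ^{N+1}/(N+1)!\), an assumption consistent with the text's description of a term that decreases exponentially in N.

```python
import math

def choose_parameters(kappa, h_x, h_t, eps):
    """Split an error budget eps between the two error contributions
    described in the text: a horizon-truncation term (modeled here,
    as an assumption, by the Poisson tail kappa**(N+1)/(N+1)!) and
    the discretization term N*(h_x*delta_x + h_t*delta_t)."""
    N = 1
    while kappa ** (N + 1) / math.factorial(N + 1) > eps / 2:
        N += 1
    # spend the remaining eps/2 evenly on the two partition directions
    delta_x = eps / (4 * N * h_x)
    delta_t = eps / (4 * N * h_t)
    return N, delta_x, delta_t
```

By construction the resulting pair \((\delta _x,\delta _t)\) makes the total modeled error at most eps.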
5 Case Study: Thermostatically Controlled Loads
Thermostatically Controlled Loads (TCLs) have shown potential for engagement in power system services such as load shifting, peak shaving, and demand response programs. Recent studies have focused on the development of models for aggregated populations of TCLs [12, 20, 28], and formal abstraction techniques have been employed to verify properties of TCL models [2, 20]. We employ a TCL model as the case study of this paper. The model describes the continuous-time evolution of the temperature in a TCL by a linear SDE; the value of the temperature is available to a thermostat for regulation via a network of independent, asynchronous sensors [3, 29]. We recast this model as a cPJMP and quantitatively verify user comfort as a probabilistic safety problem.
Dynamical Model for the Case Study. The continuous-time evolution of the temperature \(\theta = \{\theta _t,\,\, t\in \mathbb R_{\ge 0}\}\) in a cooling TCL can be specified by the following linear SDE:
\(d\theta _t = \frac{1}{RC}\left( \theta _a - q_tRP_{rate} - \theta _t\right) dt + \sigma \,dW_t, \quad (11)\)
where \(\{W_t,\,\,t\in \mathbb R_{\ge 0}\}\) is the standard Brownian motion, \(\theta _a\) is the ambient temperature, C and R indicate the thermal capacitance and resistance, \(P_{rate}\) is the rate of energy transfer, and \(\sigma \) is the standard deviation of the noise term. The process \(\{q_t,\,\,t\in \mathbb R_{\ge 0}\}\) represents the state of the thermostat at time t, with \(q_t\in \{0,1\}\) for the OFF and ON modes, respectively (the latter meaning that the cooler is functioning). For a given temperature \(\theta _{t}\) at time t and a fixed mode \(q_{t}\), the temperature at time \(s\ge t\) is characterized by the solution of (11), namely
\(\theta _s = a\theta _t + (1-a)\left( \theta _a - q_tRP_{rate}\right) + w_s,\)
where \(a = \exp [-(s-t)/RC]\) and \(w_{s} \sim \mathcal N\left( 0,\frac{1}{2}\sigma ^2 RC(1-a^2)\right) \).
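This closed-form solution lends itself to simulation that is exact in distribution for a fixed mode. Below is a minimal Python sketch; the mean follows the solution of (11) with the mode held fixed, and all numerical parameter values are illustrative assumptions, not the ones used in the paper.

```python
import math
import random

def tcl_step(theta_t, q_t, dt, theta_a=32.0, R=2.0, C=10.0,
             P_rate=14.0, sigma=0.1):
    """Sample the temperature dt time units ahead with the mode held
    at q_t, using the closed-form solution of the linear SDE."""
    a = math.exp(-dt / (R * C))  # decay factor over the horizon dt
    mean = a * theta_t + (1 - a) * (theta_a - q_t * R * P_rate)
    std = math.sqrt(0.5 * sigma ** 2 * R * C * (1 - a ** 2))
    return mean + random.gauss(0.0, std)
```

With the noise switched off, the temperature relaxes toward \(\theta _a\) in the OFF mode and toward \(\theta _a - RP_{rate}\) in the ON mode, as expected.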
We assume that the value of the temperature is available to the thermostat via a network of sensors at possibly non-uniform sampling times \(\{\tau _n,n\in \mathbb N\}\). For a network of independent and asynchronous sensors, when the number of sensors is large, the time between two consecutive available temperature values \((\tau _{n+1}-\tau _n)\) can be approximated by an exponential distribution [3, 29]. We assume that the associated rate depends on the temperature, \(\lambda (\theta _{\tau _n})\), where \(\theta _{\tau _n}\) is the latest available temperature (at time \(\tau _n\)).
The temperature of the cooling TCL is regulated by updating the thermostat mode via the equation \(q_{\tau _{n+1}} = f(q_{\tau _n},\theta _{\tau _{n+1}})\), which is based on the discrete switching law
\(f(q,\theta ) = {\left\{ \begin{array}{ll} 1, &{} \theta \ge \theta _{\mathfrak s} + \delta _{\mathfrak d}/2,\\ 0, &{} \theta \le \theta _{\mathfrak s} - \delta _{\mathfrak d}/2,\\ q, &{} \text {otherwise,} \end{array}\right. } \quad (12)\)
where \(\theta _{\mathfrak s}\) denotes a given temperature set-point and \(\delta _{\mathfrak d}\) a dead-band, which together characterize the temperature operating range. The mode \(q_t\) is then a piecewise-constant, right-continuous function of time, which can change value from \(q_{\tau _n}\) to \(q_{\tau _{n+1}}\) at time \(\tau _{n+1}\) according to the logic in (12).
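The hysteresis logic of the thermostat can be sketched as follows. The set-point and dead-band values, and the exact placement of the switching thresholds at \(\theta _{\mathfrak s}\pm \delta _{\mathfrak d}/2\), are illustrative assumptions.

```python
def next_mode(q, theta, theta_s=20.0, delta_d=1.0):
    """Thermostat switching logic: turn the cooler ON above the upper
    edge of the dead-band, OFF below the lower edge, and keep the
    current mode inside the dead-band (hysteresis)."""
    if theta >= theta_s + delta_d / 2:
        return 1   # too warm: start cooling
    if theta <= theta_s - delta_d / 2:
        return 0   # cool enough: stop cooling
    return q       # inside the dead-band: no change
```

The hysteresis prevents rapid ON/OFF chattering: inside the dead-band the mode simply persists.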
cPJMP for the Case Study. The values of the temperature and the mode of the thermostat evolve over the hybrid state space \(K = \{0,1\}\times \mathbb R\), namely a space made up of discrete and continuous components [2]. The temperature space \(\mathbb R\) is endowed with the Euclidean metric and with the Borel sigma-algebra. The jump measure of the process is an integral kernel (Assumption 2 is valid), with \(\mu \) being the Lebesgue measure and with the density function
\(p\big (t,\tau ,(q,\theta ),(q',\theta ')\big ) = \varvec{\delta }_{\mathfrak d}\big [q' - f(q,\theta ')\big ]\,\phi (\theta ';\bar{m},\bar{\sigma }^2),\)
where \(\varvec{\delta }_{\mathfrak d}[\cdot ]\) is the Kronecker delta function, \(\phi (\cdot ;\bar{m},\bar{\sigma }^2)\) is the Gaussian density function with mean \(\bar{m}\) and variance \(\bar{\sigma }^2\), and
\(\bar{m} = a\theta + (1-a)\left( \theta _a - qRP_{rate}\right) ,\quad \bar{\sigma }^2 = \tfrac{1}{2}\sigma ^2RC\left( 1-a^2\right) ,\quad a = e^{-(\tau -t)/RC}.\)
We are interested in quantifying a proxy for user comfort: we quantify whether the likelihood of having the temperature inside a dead-band \([\theta _-,\theta _+]\) during the time interval [0, T] is greater than a given threshold. This problem can be mathematically formulated as computing the safety probability of the model over the safe set \(\mathcal A = \{0,1\}\times [\theta _-,\theta _+]\).
Note that the density function \(p(\cdot )\) is slightly different from the general formulation of cPJMPs in Sect. 2, in that it depends on the elapsed time \((\tau -t)\) (through \(a(\cdot )\)) instead of just the jump time \(\tau \). This difference requires a slight modification of the abstraction error, which is presented next.
Computation of Probabilistic Safety. We consider a jump rate function \(\lambda (t,\theta ) = \lambda _0e^{-\alpha t}\cosh [2\beta (\theta -\theta _{\mathfrak s})]\) with positive constants \(\lambda _0,\alpha ,\) and \(\beta \). The term \(e^{-\alpha t}\) models the reduction of the sampling rate of the sensors over time, while the hyperbolic cosine \(\cosh [2\beta (\theta -\theta _{\mathfrak s})]\) captures the fact that the sensors provide more frequent temperature measurements for larger deviations of the temperature from the set-point. The assumption on the jump rate function raised in Lemma 1 holds with constant \(\kappa = \lambda _0\cosh (\beta \delta _{\mathfrak d})/\alpha \), whereas Assumption 1 holds with \(h_\lambda = 2\lambda _0\beta \sinh (\beta \delta _{\mathfrak d})\) and \(\varLambda =\lambda _0\cosh (\beta \delta _{\mathfrak d})\). The application of the abstraction technique presented in this paper to the case study leads to the error
with constants \(h_1,h_2,h_3\) defined as
The additional terms contributing to the error, in comparison with the results of Theorem 4, are due to the dependence of the mean and variance of the Gaussian density function \(\phi \) on the current time t. We use the values in Fig. 2 (left) for the parameters in the numerical simulation. The standard deviation of the process noise is \(\sigma = 0.1\,[^\circ C\, s^{-1/2}]\). The time bound for the safety specification is \(T = 1\,[h]\). The parameters of the jump rate function are \(\alpha =1,\beta =1,\lambda _0 =1\), which means that if the TCL is initialized at the set-point, the rate of temperature observations is 20 times higher than the decay rate of the TCL (1/RC).
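The constants \(\varLambda , h_\lambda , \kappa \) follow from elementary calculus on \(\lambda (t,\theta )\): the supremum of the rate over the operating range, the maximal derivative in \(\theta \), and the time integral of the rate bound, respectively. A sketch, assuming the operating range is \(|\theta -\theta _{\mathfrak s}|\le \delta _{\mathfrak d}/2\):

```python
import math

def rate_constants(lam0=1.0, alpha=1.0, beta=1.0, delta_d=1.0):
    """Constants for the jump rate
    lambda(t, theta) = lam0*exp(-alpha*t)*cosh(2*beta*(theta - theta_s)),
    restricted to |theta - theta_s| <= delta_d/2 (assumed operating range)."""
    Lam = lam0 * math.cosh(beta * delta_d)               # sup of the rate
    h_lam = 2 * lam0 * beta * math.sinh(beta * delta_d)  # Lipschitz const. in theta
    kappa = lam0 * math.cosh(beta * delta_d) / alpha     # time-integral bound
    return Lam, h_lam, kappa
```

With \(\lambda _0=\alpha =\beta =\delta _{\mathfrak d}=1\) this gives \(\varLambda =\kappa =\cosh (1)\approx 1.543\) and \(h_\lambda =2\sinh (1)\approx 2.350\).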
We have implemented Algorithm 1 for the abstraction and the computation of the safety probability over the model using the software tool FAUST \(^{\mathsf 2}\) [22]. Figure 2 (right) shows the error bound from (13), as a function of the number of partition bins for the temperature \(n_\theta \) and for the time \(n_t\), with a fixed step-horizon \(N=8\). One can see that, for instance, the abstraction algorithm guarantees an error bound of 0.23 by selecting \(n_\theta =n_t=4\times 10^3\) (\(\delta _\theta = 1.25\times 10^{-4}, \delta _t = 2.5\times 10^{-4}\)), which generates a DTMC with \(3.2\times 10^7\) states. This indicates that meaningful error bounds (less than one) may lead to large DTMCs.
The derived error bounds can in general be conservative. To demonstrate this conservativeness, we perform the analysis with partition diameters \(\delta _\theta = \delta _t = 0.0125\) (\(n_\theta = 40, n_t=80\)), which results in a DTMC with 6400 states, for which the error bounds are not meaningful. Figure 3 (top row) shows the computed safety probabilities as a function of the initial temperature \(\theta _0\) at the initial time \(t_0\): the left plot is for the ON mode and the right plot for the OFF mode. Figure 3 (bottom row) shows the safety likelihood estimated via Monte Carlo simulations with 1000 runs initialized at the representative points used in Algorithm 1. The computed and estimated values are very close to each other, with a maximum relative difference of \(12\,\%\). The results suggest that the error bounds can be reduced by employing advanced gridding techniques [18, 21].
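A Monte Carlo estimator of the kind used for this sanity check can be sketched as follows. Jump times of the time- and state-dependent Poisson clock are sampled by thinning against the rate bound \(\varLambda \), and safety is monitored at the jump times of the embedded chain. All parameter values are illustrative assumptions (the paper's values are listed in its Fig. 2).

```python
import math
import random

# Illustrative parameters (assumptions, not the paper's Fig. 2 values).
THETA_A, R, C, P_RATE, SIGMA = 32.0, 2.0, 10.0, 14.0, 0.1
THETA_S, DELTA_D = 20.0, 1.0
LAM0, ALPHA, BETA = 1.0, 1.0, 1.0
LAM_MAX = LAM0 * math.cosh(BETA * DELTA_D)  # upper bound on the jump rate

def rate(t, theta):
    return LAM0 * math.exp(-ALPHA * t) * math.cosh(2 * BETA * (theta - THETA_S))

def flow(theta, q, dt):
    """Closed-form temperature update over dt with the mode held at q."""
    a = math.exp(-dt / (R * C))
    std = math.sqrt(0.5 * SIGMA ** 2 * R * C * (1 - a * a))
    return a * theta + (1 - a) * (THETA_A - q * R * P_RATE) + random.gauss(0.0, std)

def switch(q, theta):
    if theta >= THETA_S + DELTA_D / 2:
        return 1
    if theta <= THETA_S - DELTA_D / 2:
        return 0
    return q

def safe_run(theta, q, T, lo, hi):
    """One trajectory over [0, T]: candidate jumps arrive at rate LAM_MAX
    and are accepted with probability rate/LAM_MAX (thinning)."""
    if not (lo <= theta <= hi):
        return False
    t_last, t = 0.0, 0.0
    while True:
        t += random.expovariate(LAM_MAX)
        if t >= T:
            return True
        if random.random() < rate(t, theta) / LAM_MAX:  # accepted: real jump
            theta = flow(theta, q, t - t_last)
            q = switch(q, theta)
            t_last = t
            if not (lo <= theta <= hi):
                return False

def safety_mc(theta0, q0, T=1.0, lo=19.5, hi=20.5, runs=1000):
    """Fraction of runs whose temperature stays inside [lo, hi] up to T."""
    return sum(safe_run(theta0, q0, T, lo, hi) for _ in range(runs)) / runs
```

Note that the thinning bound \(\varLambda \) is valid because a run is terminated as soon as the temperature leaves the dead-band, so surviving states stay in the operating range.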
6 Conclusions
We have presented an abstraction-based safety verification procedure for pure jump Markov processes with continuous state spaces. While the focus of this work has been on probabilistic safety, the technique can be extended to verify richer temporal properties. The errors can be sharpened via adaptive, non-uniform partitioning schemes [18, 19]. cPJMPs are a generalization of CTMCs under the assumption of constant values in between jumps: we plan to investigate the challenging problem of non-constant dynamics between jumps [15].
Notes
- 1.
Recall that a (homogeneous) Poisson process \({\{ \mathcal N_t \mid t \ge 0 \}}\) with rate \(\lambda \) is a Lévy process with \(\mathcal N_0 = 0\) and \(\mathbb P\{\mathcal N_t = n\} = \frac{(\lambda t)^n}{n!} e^{-\lambda t}\).
- 2.
The solution can be derived by observing that the process satisfies the recursive equation \(X_{\tau _{n+1}}-X_{\tau _n} = c_0 X_{\tau _n}\), where the jumps occur at times \(\tau _n,\,\, n=1,2,3,\ldots \) according to \(\mathcal N_t\).
- 3.
A slight modification of the approach presented in this paper allows for verifying more general quantitative questions such as \(\mathbb P_{\sim p}(\varPhi \,\mathsf U^{(T_1,T_2)}\varPsi )\), defined over any state labels \(\varPsi ,\varPhi \) and over any (possibly unbounded) time interval \((T_1,T_2)\) – an adaptation is required in the construction of the sets \(\mathcal G,\mathcal H\) in (7).
References
Abate, A., Katoen, J.-P., Lygeros, J., Prandini, M.: Approximate model checking of stochastic hybrid systems. Eur. J. Control 16(6), 624–641 (2010)
Abate, A., Prandini, M., Lygeros, J., Sastry, S.: Probabilistic reachability and safety for controlled discrete time stochastic hybrid systems. Automatica 44(11), 2724–2734 (2008)
Aït-Sahalia, Y., Mykland, P.A.: The effects of random and discrete sampling when estimating continuous-time diffusions. Econometrica 71(2), 483–549 (2003)
Allen, L.J.S.: An Introduction to Stochastic Processes with Applications to Biology. Pearson/Prentice Hall, Englewood Cliffs (2003)
Aziz, A., Sanwal, K., Singhal, V., Brayton, R.: Verifying continuous time Markov chains. In: Alur, R., Henzinger, T.A. (eds.) CAV 1996. LNCS, vol. 1102, pp. 269–276. Springer, Heidelberg (1996)
Baier, C., Haverkort, B., Hermanns, H., Katoen, J.-P.: Model checking continuous-time Markov chains by transient analysis. In: Allen Emerson, E., Prasad Sistla, A. (eds.) CAV 2000. LNCS, vol. 1855, pp. 358–372. Springer, Heidelberg (2000)
Baier, C., Haverkort, B., Hermanns, H., Katoen, J.-P.: Model-checking algorithms for continuous-time Markov chains. IEEE Trans. Softw. Eng. 29(6), 524–541 (2003)
Baier, C., Katoen, J.-P.: Principles of Model Checking. MIT Press, Cambridge (2008)
Baier, C., Katoen, J.-P., Hermanns, H.: Approximate symbolic model checking of continuous-time Markov chains (extended abstract). In: Baeten, J.C.M., Mauw, S. (eds.) CONCUR 1999. LNCS, vol. 1664, pp. 146–161. Springer, Heidelberg (1999)
Bertsekas, D.P., Shreve, S.E.: Stochastic Optimal Control: The Discrete-Time Case. Athena Scientific, Belmont (1996)
Billingsley, P.: Convergence of Probability Measures. Wiley, New York (1999)
Callaway, D.S.: Tapping the energy storage potential in electric loads to deliver load following and regulation, with application to wind energy. Energy Convers. Manag. 50(5), 1389–1400 (2009)
Chen, T., Diciolla, M., Kwiatkowska, M., Mereacre, A.: Time-bounded verification of CTMCs against real-time specifications. In: Fahrenberg, U., Tripakis, S. (eds.) FORMATS 2011. LNCS, vol. 6919, pp. 26–42. Springer, Heidelberg (2011)
Chen, T., Han, T., Katoen, J.-P., Mereacre, A.: Model checking of continuous-time Markov chains against timed automata specifications. Logical Methods Comput. Sci. 7(1), 1–34 (2011)
Davis, M.H.A.: Markov Models and Optimization. Chapman & Hall/CRC Press, London (1993)
Esmaeil Zadeh Soudjani, S., Abate, A.: Adaptive gridding for abstraction and verification of stochastic hybrid systems. In: QEST, pp. 59–69 (2011)
Esmaeil Zadeh Soudjani, S., Abate, A.: Higher-order approximations for verification of stochastic hybrid systems. In: Chakraborty, S., Mukund, M. (eds.) ATVA 2012. LNCS, vol. 7561, pp. 416–434. Springer, Heidelberg (2012)
Esmaeil Zadeh Soudjani, S., Abate, A.: Adaptive and sequential gridding procedures for the abstraction and verification of stochastic processes. SIAM J. Appl. Dyn. Syst. 12(2), 921–956 (2013)
Esmaeil Zadeh Soudjani, S., Abate, A.: Precise approximations of the probability distribution of a Markov process in time: an application to probabilistic invariance. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014 (ETAPS). LNCS, vol. 8413, pp. 547–561. Springer, Heidelberg (2014)
Esmaeil Zadeh Soudjani, S., Abate, A.: Aggregation and control of populations of thermostatically controlled loads by formal abstractions. IEEE Trans. Control Syst. Technol. 23(3), 975–990 (2015)
Esmaeil Zadeh Soudjani, S., Abate, A.: Quantitative approximation of the probability distribution of a Markov process by formal abstractions. Logical Methods Comput. Sci. 11(3), 1–29 (2015)
Esmaeil Zadeh Soudjani, S., Gevaerts, C., Abate, A.: \({\sf FAUST^2}\): formal abstractions of uncountable-state stochastic processes. In: Baier, C., Tinelli, C. (eds.) TACAS 2015. LNCS, vol. 9035, pp. 272–286. Springer, Heidelberg (2015)
Gihman, I.I., Skorokhod, A.V.: The Theory of Stochastic Processes: II. Die Grundlehren der mathematischen Wissenschaften, vol. 218. Springer, Heidelberg (1975)
Hinton, A., Kwiatkowska, M., Norman, G., Parker, D.: PRISM: a tool for automatic verification of probabilistic systems. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 441–444. Springer, Heidelberg (2006)
Jacod, J., Shiryaev, A.: Limit Theorems for Stochastic Processes. Grundlehren der mathematischen Wissenschaften. Springer, Heidelberg (2010)
Kallenberg, O.: Foundations of Modern Probability. Probability and its Applications. Springer, New York (2002)
Katoen, J.-P., Khattri, M., Zapreev, I.S.: A Markov reward model checker. In: QEST, pp. 243–244. IEEE (2005)
Mathieu, J.L., Koch, S., Callaway, D.S.: State estimation and control of electric loads to manage real-time energy imbalance. IEEE Trans. Power Syst. 28(1), 430–440 (2013)
Micheli, M., Jordan, M.: Random sampling of a continuous-time stochastic dynamical system. In: Proceedings of the 15th International Symposium on the Mathematical Theory of Networks and Systems (MTNS), pp. 1–15 (2002)
Panangaden, P.: Labelled Markov Processes. Imperial College Press, London (2009)
Platen, E., Bruti-Liberati, N.: Numerical Solution of Stochastic Differential Equations with Jumps in Finance. Stochastic Modelling and Applied Probability. Springer, Heidelberg (2010)
Tkachev, I., Abate, A.: On infinite-horizon probabilistic properties and stochastic bisimulation functions. In: Proceedings of the 50th IEEE Conference on Decision and Control and European Control Conference, pp. 526–531, Orlando, FL, December 2011
Tkachev, I., Abate, A.: Characterization and computation of infinite-horizon specifications over Markov processes. Theor. Comput. Sci. 515, 1–18 (2014)
Van Kampen, N.G.: Stochastic Processes in Physics and Chemistry. North-Holland Personal Library. Elsevier Science, Amsterdam (2011)
Ziai, Y.: Statistical models of claim amount distributions in general insurance. Ph.D. thesis, School of Engineering and Maths, City University London (1979)
© 2016 Springer-Verlag Berlin Heidelberg
Esmaeil Zadeh Soudjani, S., Majumdar, R., Abate, A. (2016). Safety Verification of Continuous-Space Pure Jump Markov Processes. In: Chechik, M., Raskin, JF. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2016. Lecture Notes in Computer Science(), vol 9636. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-49674-9_9