A Hierarchy of Scheduler Classes for Stochastic Automata
Abstract
Stochastic automata are a formal compositional model for concurrent stochastic timed systems, with general distributions and nondeterministic choices. Measures of interest are defined over schedulers that resolve the nondeterminism. In this paper we investigate the power of various theoretically and practically motivated classes of schedulers, considering the classic complete-information view and a restriction to non-prophetic schedulers. We prove a hierarchy of scheduler classes w.r.t. unbounded probabilistic reachability. We find that, unlike in Markovian formalisms, stochastic automata distinguish most classes even in this basic setting. Verification and strategy synthesis methods thus face a trade-off between powerful and efficient classes. Using lightweight scheduler sampling, we explore this trade-off and demonstrate the concept of a useful approximate verification technique for stochastic automata.
1 Introduction
The need to analyse continuous-time stochastic models arises in many practical contexts, including critical infrastructures [4], railway engineering [36], space mission planning [7], and security [28]. This has led to a number of discrete-event simulation tools, such as those for networking [34, 35, 42], whose probabilistic semantics is founded on generalised semi-Markov processes (GSMP [21, 33]). Nondeterminism arises through the inherent concurrency of independent processes [11], but may also stem from deliberate underspecification. Modelling such uncertainty with probability is convenient for simulation, but not always adequate [3, 29]. Various models and formalisms have thus been proposed to extend continuous-time stochastic processes with nondeterminism [8, 10, 19, 23, 27, 38]. It is then possible to verify such systems by considering the extremal probabilities of a property. These are the supremum and infimum of the probabilities of the property in the purely stochastic systems induced by classes of schedulers (also called strategies, policies or adversaries) that resolve all nondeterminism. If the nondeterminism is considered controllable, one may alternatively be interested in the planning problem of synthesising a scheduler that satisfies certain probability bounds.
We consider closed systems of stochastic automata (SA [16]), which extend GSMP and feature both generally distributed stochastic delays as well as discrete nondeterministic choices. The latter may arise from non-continuous distributions (e.g. deterministic delays), urgent edges, and edges waiting on multiple clocks. Numerical verification algorithms exist only for very limited subclasses of SA: Buchholz et al. [13] restrict to phase-type or matrix-exponential distributions, such that nondeterminism cannot arise (as each edge is guarded by a single clock). Bryans et al. [12] propose two algorithms that require an a priori fixed scheduler, continuous bounded distributions, and that all active clocks be reset when a location is entered. The latter forces regeneration on every edge, making it impossible to use clocks as memory between locations. Regeneration is central to the work of Ballarini et al. [6], too, but they again exclude nondeterminism. The only approach that handles nondeterminism is the region-based approximation scheme of Kwiatkowska et al. [30] for a model closely related to SA, but restricted to bounded continuous distributions. Without that restriction [22], error bounds and convergence guarantees are lost.
Evidently, the combination of nondeterminism and continuous probability distributions is a particularly challenging one. With this paper, we take on the underlying problem from a fundamental perspective: we investigate the power of, and the relationships between, different classes of schedulers for SA. Our motivation is, on the one hand, that a clear understanding of scheduler classes is crucial for designing verification algorithms. For example, Markov decision process (MDP) model checking works well because memoryless schedulers suffice for reachability, and the efficient time-bounded analysis of continuous-time MDP (CTMDP) exploits a relationship between two scheduler classes that are sufficiently simple, but on their own do not realise the desired extremal probabilities [14]. When it comes to planning problems, on the other hand, practitioners desire simple solutions, i.e. schedulers that need little information and limited memory, so as to be explainable and suitable for implementation on e.g. resource-constrained embedded systems. Understanding the capabilities of scheduler classes helps decide on the trade-off between simplicity and the ability to attain optimal results.
We use two perspectives on schedulers from the literature: the classic complete-information residual lifetimes semantics [9], where optimality is defined via history-dependent schedulers that see the entire current state, and non-prophetic schedulers [25] that cannot observe the timing of future events. Within each perspective, we define classes of schedulers whose views of the state and history are restricted in various ways (Sect. 3). We prove their relative ordering w.r.t. achieving optimal reachability probabilities (Sect. 4). We find that SA distinguish most classes. In particular, memoryless schedulers suffice in the complete-information setting (as is implicit in the method of Kwiatkowska et al. [30]), but turn out to be suboptimal in the more realistic non-prophetic case. Considering only the relative order of clock expiration times, as suggested by the first algorithm of Bryans et al. [12], surprisingly leads to partly suboptimal, partly incomparable classes. Our distinguishing SA are small and employ a common nondeterministic gadget. They precisely pinpoint the crucial differences and how schedulers interact with the various features of SA, providing deep insights into the formalism itself.
Our study furthermore forms the basis for the application of lightweight scheduler sampling (LSS) to SA. LSS is a technique for using Monte Carlo simulation/statistical model checking with nondeterministic models. On every LSS simulation step, a pseudo-random number generator (PRNG) is re-seeded with a hash of the identifier of the current scheduler and the (restricted) information about the current state (and, for history-dependent schedulers, the previous states) that the scheduler’s class may observe. The PRNG’s first iterate then determines the scheduler’s action deterministically. LSS has been successfully applied to MDP [18, 31, 32] and probabilistic timed automata [15, 26]. Using only constant memory, LSS samples schedulers uniformly from a selected scheduler class to find “near-optimal” schedulers that conservatively approximate the true extremal probabilities. Its principal advantage is that it is largely indifferent to the size of the state space and of the scheduler space; in general, sampling efficiency depends only on the likelihood of selecting near-optimal schedulers. However, the mass of near-optimal schedulers in a scheduler class that also includes the optimal scheduler may be smaller than the mass in a class that does not include it. Given that the mass of optimal schedulers may be vanishingly small, it may be advantageous to sample from a class of less powerful schedulers. We explore these trade-offs and demonstrate the concept of LSS for SA in Sect. 5.
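The hash-based decision mechanism of LSS can be sketched in a few lines of Python. This is an illustrative sketch only: the function name `lss_decision`, the use of SHA-256, and the tuple-shaped observation are our own assumptions, not part of the LSS papers cited above.

```python
import hashlib
import random

def lss_decision(scheduler_id, observation, actions):
    """Deterministically resolve one nondeterministic choice for the
    scheduler identified by `scheduler_id`.

    The PRNG is seeded with a hash of the scheduler identifier and the
    observable part of the current state (or history); its first
    iterate then selects one of the currently enabled actions."""
    digest = hashlib.sha256(repr((scheduler_id, observation)).encode()).digest()
    rng = random.Random(int.from_bytes(digest[:8], "big"))
    return actions[rng.randrange(len(actions))]
```

Sampling a scheduler then amounts to sampling an integer identifier; re-running a simulation with the same identifier reproduces the same deterministic scheduler using only constant memory.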
Other Related Work. Alur et al. first mention nondeterministic stochastic systems similar to SA in [2]. Markov automata (MA [19]), interactive Markov chains (IMC [27]) and CTMDP are special cases of SA restricted to exponential distributions. Song et al. [37] look into partial-information distributed schedulers for MA, combining earlier works of de Alfaro [1] and of Giro and D’Argenio [20] for MDP. Their focus is on information flow and hiding in parallel specifications. Wolf et al. [39] investigate the power of classic (time-abstract, deterministic and memoryless) scheduler classes for IMC. They establish (non-strict) subset relationships for almost all classes w.r.t. trace distribution equivalence, a very strong measure. Wolovick and Johr [41] show that the class of measurable schedulers for CTMDP is complete and sufficient for reachability problems.
2 Preliminaries
For a given set S, its power set is \(\mathcal {P}({S}) \). We denote by \(\mathbb {R}\), \(\mathbb {R}^+ \), and \(\mathbb {R}^{+}_{0} \) the sets of real numbers, positive real numbers and non-negative real numbers, respectively. A (discrete) probability distribution over a set \(\varOmega \) is a function \(\mu :\varOmega \rightarrow [0, 1]\) such that its support \(\{\, \omega \in \varOmega \mid \mu (\omega ) > 0 \,\}\) is countable and \(\sum _{\omega \in \varOmega } \mu (\omega ) = 1\). \(\mathrm {Dist}({\varOmega }) \) is the set of probability distributions over \(\varOmega \). We write \(\mathcal {D}(\omega ) \) for the Dirac distribution for \(\omega \), defined by \(\mathcal {D}(\omega ) (\omega ) = 1\). \(\varOmega \) is measurable if it is endowed with a \(\sigma \)-algebra \(\sigma (\varOmega )\): a collection of measurable subsets of \(\varOmega \). A (continuous) probability measure over \(\varOmega \) is a function \(\mu :\sigma (\varOmega ) \rightarrow [0, 1]\) such that \(\mu (\varOmega )=1\) and \(\mu (\cup _{i \in I}\, B_i) = \sum _{i \in I}\, \mu (B_i)\) for any countable index set I and pairwise disjoint measurable sets \(B_i\subseteq \varOmega \). \(\mathrm {Prob}({\varOmega })\) is the set of probability measures over \(\varOmega \). Each \(\mu \in \mathrm {Dist}({\varOmega }) \) induces a probability measure. Given probability measures \(\mu _1\) and \(\mu _2\), we denote by \(\mu _1 \otimes \mu _2\) the product measure: the unique probability measure such that \((\mu _1 \otimes \mu _2)(B_1 \times B_2) = \mu _1(B_1) \cdot \mu _2(B_2)\) for all measurable \(B_1\) and \(B_2\). For a collection of measures \((\mu _i)_{i\in I}\), we analogously denote the product measure by \(\bigotimes _{i \in I} \mu _i\). Let \( Val = V \rightarrow \mathbb {R}^{+}_{0}\) be the set of valuations for an (implicit) set V of (non-negative real-valued) variables. \(\mathbf {0} \in Val \) assigns value zero to all variables.
Given \(X\subseteq V\) and \(v \in Val \), we write v[X] for the valuation defined by \(v[X](x) = 0\) if \(x \in X\) and \(v[X](y) = v(y)\) otherwise. For \(t \in \mathbb {R}^{+}_{0} \), \(v + t\) is the valuation defined by \((v + t)(x) = v(x) + t\) for all \(x \in V\).
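The two operations on valuations, reset \(v[X]\) and time advance \(v + t\), can be sketched as follows, representing a valuation as a Python dictionary (an illustrative encoding; the function names `reset` and `advance` are our own):

```python
def reset(v, X):
    """v[X]: the valuation with the variables in X set to zero and
    all other variables unchanged."""
    return {x: (0.0 if x in X else t) for x, t in v.items()}

def advance(v, t):
    """v + t: the valuation after t time units have passed; all
    variables grow at rate 1."""
    return {x: value + t for x, value in v.items()}
```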
Stochastic Automata [16] extend labelled transition systems with stochastic clocks: real-valued variables that increase synchronously with rate 1 over time and expire some random amount of time after having been restarted. Formally:
Definition 1
A stochastic automaton (SA) is a tuple \(\langle Loc , \mathcal {C}, A , E , F , \ell _ init \rangle \), where \( Loc \) is a countable set of locations, \(\mathcal {C} \) is a finite set of clocks, \(A \) is the finite action alphabet, and \(E : Loc \rightarrow \mathcal {P}({\mathcal {P}({\mathcal {C}}) \times A \times \mathcal {P}({\mathcal {C}}) \times \mathrm {Dist}({ Loc })}) \) is the edge function, which maps each location to a finite set of edges that in turn consist of a guard set of clocks, a label, a restart set of clocks and a distribution over target locations. \(F :\mathcal {C} \rightarrow \mathrm {Prob}({\mathbb {R}^{+}_{0}})\) is the delay measure function that maps each clock to a probability measure, and \(\ell _ init \in Loc \) is the initial location.
We also write \(\ell \xrightarrow {{{\scriptstyle {G, a, R}}}}_E \mu \) for \(\langle G, a, R, \mu \rangle \in E (\ell )\). W.l.o.g. we restrict to SA where edges are fully characterised by source location and action label, i.e. whenever \(\ell \xrightarrow {{{\scriptstyle {G_1, a, R_1}}}}_E \mu _1\) and \(\ell \xrightarrow {{{\scriptstyle {G_2, a, R_2}}}}_E \mu _2\), then \(G_1 = G_2\), \(R_1 = R_2\) and \(\mu _1 = \mu _2\).
Example 1
We show an example SA, \(M_0\), in Fig. 1. Its initial location is \(\ell _0\). It has two clocks, x and y, with F(x) and F(y) both being the continuous uniform distribution over the interval [0, 1]. No time can pass in locations \(\ell _0\) and \(\ell _1\), since both have an outgoing edge with an empty guard set. We omit action labels and assume every edge to have a unique label. On entering \(\ell _1\), both clocks are restarted. The choice of going to either \(\ell _2\) or \(\ell _3\) from \(\ell _1\) is nondeterministic, since the two edges are always enabled at the same time. In \(\ell _2\), we have to wait until the first of the two clocks expires. If that is x, we have to move to location ✓; if it is y, we have to move to ✗. The probability that both expire at the same time is zero. Location \(\ell _3\) behaves analogously, but with the target locations interchanged.
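The structure of \(M_0\) can be written down as plain data. The following is an illustrative sketch only: locations are strings, all target distributions are Dirac (as in the figure), and action labels are omitted since every edge is assumed uniquely labelled.

```python
import random

# Delay measures: both clocks expire a Uniform(0, 1) time after restart.
F = {"x": lambda rng: rng.uniform(0.0, 1.0),
     "y": lambda rng: rng.uniform(0.0, 1.0)}

# Edge function: location -> list of (guard set, restart set, target).
E = {
    "l0": [(set(), {"x", "y"}, "l1")],   # restart both clocks
    "l1": [(set(), set(), "l2"),         # nondeterministic choice:
           (set(), set(), "l3")],        # both edges always enabled
    "l2": [({"x"}, set(), "✓"),          # x expires first: goal reached
           ({"y"}, set(), "✗")],
    "l3": [({"x"}, set(), "✗"),          # targets interchanged
           ({"y"}, set(), "✓")],
}
```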
Timed Probabilistic Transition Systems form the semantics of SA. They are finitely-nondeterministic uncountable-state transition systems:
Definition 2
A (finitely nondeterministic) timed probabilistic transition system (TPTS) is a tuple \(\langle S , A ', T , s_ init \rangle \). \(S \) is a measurable set of states. \(A ' = \mathbb {R}^+ \uplus A \) is the alphabet, partitioned into delays in \(\mathbb {R}^+\) and jumps in \(A \). \(T :S \rightarrow \mathcal {P}({A ' \times \mathrm {Prob}({S})}) \) is the transition function, which maps each state to a finite set of transitions, each consisting of a label in \(A '\) and a measure over target states. The initial state is \(s_ init \in S \). For all \(s \in S \), we require \(|T (s)| = 1\) if \(T (s)\) contains a transition labelled with a delay, i.e. states admitting delays are deterministic.
We also write \(s \xrightarrow {{{\scriptstyle {a}}}}_T \mu \) for \(\langle a, \mu \rangle \in T (s)\). A run is an infinite alternating sequence \(s_0\, a_0\, s_1\, a_1 \ldots \in (S \times A ')^\omega \) with \(s_0 = s_ init \). A history is a finite prefix of a run ending in a state, i.e. an element of \((S \times A ')^* \times S \). Runs resolve all nondeterministic and probabilistic choices. A scheduler resolves only the nondeterminism:
Definition 3
A measurable function \(\mathfrak {s} :(S \times A ')^* \times S \rightarrow \mathrm {Dist}({A ' \times \mathrm {Prob}({S})}) \) is a scheduler if, for all histories \(h \in (S \times A ')^* \times S \), \(\mathfrak {s} (h)(\langle a, \mu \rangle ) > 0\) implies \( lst _h \xrightarrow {{{\scriptstyle {a}}}}_T \mu \), where \( lst _h\) is the last state of h.
Once a scheduler has chosen \(s_i \xrightarrow {{{\scriptstyle {a}}}}_T \mu \), the successor state \(s_{i+1}\) is picked randomly according to \(\mu \). Every scheduler \(\mathfrak {s} \) defines a probability measure \(\mathbb {P}_\mathfrak {s} \) on the space of all runs. For a formal definition, see [40]. As is usual, we restrict to non-Zeno schedulers that make time diverge with probability one: we require \(\mathbb {P}_\mathfrak {s} (\varPi _\infty ) = 1\), where \(\varPi _\infty \) is the set of runs in which the sum of delays is \(\infty \). In the remainder of this paper we consider extremal probabilities of reaching a set of goal locations G:
Definition 4
For \(G \subseteq Loc \), let \(\varPi _{J _G}\) be the set of runs that visit a state whose location is in G. Let \(\mathfrak {S} \) be a class of schedulers. Then \(\mathrm {P}^{\mathfrak {S}}_\mathrm {\!min}(G) \) and \(\mathrm {P}^{\mathfrak {S}}_\mathrm {\!max}(G) \) are the minimum and maximum reachability probabilities for G under \(\mathfrak {S} \), defined as \(\mathrm {P}^{\mathfrak {S}}_\mathrm {\!min}(G) = \inf _{\mathfrak {s} \in \mathfrak {S}} \mathbb {P}_\mathfrak {s} (\varPi _{J _G})\) and \(\mathrm {P}^{\mathfrak {S}}_\mathrm {\!max}(G) = \sup _{\mathfrak {s} \in \mathfrak {S}} \mathbb {P}_\mathfrak {s} (\varPi _{J _G})\), respectively.
Semantics of Stochastic Automata. We present here the residual lifetimes semantics of [9], simplified for closed SA: any delay step must be of the minimum delay that makes some edge become enabled.
Definition 5

The semantics of a closed SA \(M = \langle Loc , \mathcal {C}, A , E , F , \ell _ init \rangle \) is a TPTS whose states are triples \(\langle \ell , v, e \rangle \) of the current location \(\ell \in Loc \), a valuation v of the clocks' current values, and a valuation e of their expiration times; its transitions are given by two rules, a jump rule for taking an enabled edge and a delay rule for letting time pass.
The second rule creates delay steps of t time units if no edge is enabled from now until just before t time units have elapsed (third premise) but then, after exactly t time units, some edge becomes enabled (second premise). The first rule applies if an edge \(\ell \xrightarrow {{{\scriptstyle {G, a, R}}}}_E \mu \) is enabled: a transition is taken with the edge’s label, the successor state’s location is chosen by \(\mu \), v is updated by resetting the clocks in R to zero, and the expiration times for the restarted clocks are resampled. All other expiration times remain unchanged. Notice that the resulting TPTS is also a nondeterministic labelled Markov process [40] (a proof can be found in [17]).
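The enabling condition and the delay rule can be sketched as follows, assuming a state's clock values v and expiration times e are dictionaries, an edge is a triple of guard set, restart set and target location, and an edge is enabled once every clock in its guard has expired, i.e. \(v(c) \ge e(c)\). The helper names are our own illustration.

```python
def enabled(edges, v, e):
    """Edges whose guard clocks have all expired, i.e. v(c) >= e(c)."""
    return [edge for edge in edges if all(v[c] >= e[c] for c in edge[0])]

def min_delay(edges, v, e):
    """Minimal delay after which some edge becomes enabled, as in the
    delay rule: each edge waits for its slowest guard clock, and the
    automaton delays until the earliest such edge.  Assumes some edge
    eventually becomes enabled without further restarts."""
    waits = [max((e[c] - v[c] for c in edge[0]), default=0.0)
             for edge in edges]
    return min(waits)
```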
Example 2
Figure 2 outlines the semantics of \(M_0\). The first step from \(\ell _0\) to all the states in \(\ell _1\) is a single transition. Its probability measure is the product of F(x) and F(y), sampling the expiration times of the two clocks. We exemplify the behaviour of all of these states by showing it for the case of expiration times e(x) and e(y) with \(e(x) < e(y)\). In this case, to maximise the probability of reaching ✓, we should take the transition to the state in \(\ell _2\). If a scheduler \(\mathfrak {s} \) can see the expiration times (noting that only their order matters here), it can always make the optimal choice and achieve probability 1.
3 Classes of Schedulers
We now define classes of schedulers for SA with restricted information, hiding in various combinations the history and parts of states such as clock values and expiration times. All definitions consider TPTS as in Definition 5, with states \(s = \langle \ell , v, e \rangle \), and we require of every scheduler \(\mathfrak {s} \) that it only select transitions available in the last state of the history, as in Definition 3.
3.1 Classic Schedulers
We first consider the “classic” complete-information setting where schedulers can in particular see expiration times. We start with restricted classes of history-dependent schedulers. Our first restriction hides the values of all clocks, only revealing the total time since the start of the history. This is inspired by the step-counting or time-tracking schedulers needed to obtain optimal step-bounded or time-bounded reachability probabilities on MDP or Markov automata:
Definition 6
A classic history-dependent global-time scheduler is a measurable function \(\mathfrak {s} :(S _{\ell ,t,e} \times A ')^* \times S _{\ell ,t,e} \rightarrow \mathrm {Dist}({A ' \times \mathrm {Prob}({S})}) \), where \(S _{\ell ,t,e} = Loc \times \mathbb {R}^{+}_{0} \times Val \), with the second component being the total time t elapsed since the start of the history. We write \(\mathfrak {S} ^{ hist }_{{\ell ,t,e}}\) for the set of all such schedulers.
We next hide the values of all clocks, revealing only their expiration times:
Definition 7
A classic history-dependent location-based scheduler is a measurable function \(\mathfrak {s} :(S _{\ell ,e} \times A ')^* \times S _{\ell ,e} \rightarrow \mathrm {Dist}({A ' \times \mathrm {Prob}({S})}) \), where \(S _{\ell ,e} = Loc \times Val \), with the second component being the clock expiration times e. We write \(\mathfrak {S} ^{ hist }_{{\ell ,e}}\) for the set of all such schedulers.
Having defined three classes of classic history-dependent schedulers, \(\mathfrak {S} ^{ hist }_{{\ell ,v,e}}\), \(\mathfrak {S} ^{ hist }_{{\ell ,t,e}}\) and \(\mathfrak {S} ^{ hist }_{{\ell ,e}}\), where \(\mathfrak {S} ^{ hist }_{{\ell ,v,e}}\) denotes the set of all schedulers of Definition 3, we also consider each of them with the restriction that they only see the relative order of clock expiration instead of the exact expiration times: for each pair of clocks \(c_1,c_2\), these schedulers see the relation \(\sim \;\in \{<,=,>\}\) in \(e(c_1) - v(c_1) \sim e(c_2) - v(c_2)\). E.g. in \(\ell _1\) of Example 2, the scheduler would not see e(x) and e(y), but only whether \(e(x) < e(y)\) or vice versa (since \(v(x) = v(y) = 0\), and equality has probability 0 here). We consider this case because the expiration order is sufficient for the first algorithm of Bryans et al. [12], and would allow optimal decisions in \(M_0\) of Fig. 1. We denote the relative order information by o, and the corresponding scheduler classes by \(\mathfrak {S} ^{ hist }_{{\ell ,v,o}}\), \(\mathfrak {S} ^{ hist }_{\ell ,t,o}\) and \(\mathfrak {S} ^{ hist }_{{\ell ,o}}\). We now define memoryless schedulers, which only see the current state and are at the core of e.g. MDP model checking. On most formalisms, they suffice to obtain optimal reachability probabilities.
Definition 8
A classic memoryless scheduler is a measurable function \(\mathfrak {s} :S \rightarrow \mathrm {Dist}({A ' \times \mathrm {Prob}({S})}) \). We write \(\mathfrak {S} ^{ ml }_{{\ell ,v,e}}\) for the set of all such schedulers.
We apply the same restrictions as for historydependent schedulers:
Definition 9
A classic memoryless global-time scheduler is a measurable function \(\mathfrak {s} :S _{\ell ,t,e} \rightarrow \mathrm {Dist}({A ' \times \mathrm {Prob}({S})}) \), with \(S _{\ell ,t,e}\) as in Definition 6. We write \(\mathfrak {S} ^{ ml }_{{\ell ,t,e}}\) for the set of all such schedulers.
Definition 10
A classic memoryless location-based scheduler is a measurable function \(\mathfrak {s} :S _{\ell ,e} \rightarrow \mathrm {Dist}({A ' \times \mathrm {Prob}({S})}) \), with \(S _{\ell ,e}\) as in Definition 7. We write \(\mathfrak {S} ^{ ml }_{{\ell ,e}}\) for the set of all such schedulers.
Again, we also consider memoryless schedulers that only see the expiration order, so we have memoryless scheduler classes \(\mathfrak {S} ^{ ml }_{{\ell ,v,e}}\), \(\mathfrak {S} ^{ ml }_{\ell ,t,e}\), \(\mathfrak {S} ^{ ml }_{{\ell ,e}}\), \(\mathfrak {S} ^{ ml }_{{\ell ,v,o}}\), \(\mathfrak {S} ^{ ml }_{{\ell ,t,o}}\) and \(\mathfrak {S} ^{ ml }_{{\ell ,o}}\). Class \(\mathfrak {S} ^{ ml }_{{\ell ,o}}\) is particularly attractive because it has a compact finite domain.
3.2 Nonprophetic Schedulers
Consider the SA \(M_0\) in Fig. 1. No matter which of the previously defined scheduler classes we choose, we always find a scheduler that achieves probability 1 of reaching ✓, and a scheduler that achieves probability 0. This is because they can all see the expiration times or the expiration order of x and y when in \(\ell _1\). When in \(\ell _1\), x and y have not yet expired (this will only happen later, in \(\ell _2\) or \(\ell _3\)), yet the schedulers already know which clock will “win”. The classic schedulers can thus be seen to make decisions based on the timing of future events. This prophetic scheduling has already been observed in [9], where a “fix” in the form of the spent lifetimes semantics was proposed. Hartmanns et al. [25] have shown that this not only still permits prophetic scheduling, but even admits divine scheduling, where a scheduler can change the future. The authors propose a complex non-prophetic semantics that provably removes all prophetic and divine behaviour.
Much of the complication of the non-prophetic semantics of [25] is due to it being specified for open SA that include delayable actions. For the closed SA setting of this paper, prophetic scheduling can be excluded more easily by hiding from the schedulers all information about what will happen in the future of the system’s evolution. This information is only contained in the expiration times e and the expiration order o. We can thus keep the semantics of Sect. 2 and modify the definition of schedulers to exclude prophetic behaviour by construction.
In what follows, we thus also consider all scheduler classes of Sect. 3.1 with the added constraint that the expiration times, resp. the expiration order, are not visible, resulting in the non-prophetic classes \(\mathfrak {S} ^{ hist }_{{\ell ,v}}\), \(\mathfrak {S} ^{ hist }_{{\ell ,t}}\), \(\mathfrak {S} ^{ hist }_{\ell }\), \(\mathfrak {S} ^{ ml }_{{\ell ,v}}\), \(\mathfrak {S} ^{ ml }_{{\ell ,t}}\) and \(\mathfrak {S} ^{ ml }_{\ell }\). Any non-prophetic scheduler can only reach ✓ in \(M_0\) with probability \(\frac{1}{2}\).
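The gap between prophetic and non-prophetic scheduling on \(M_0\) can be checked with a small Monte Carlo experiment. The following is an illustrative sketch (the function `simulate_m0` and its parameters are our own names): a prophetic scheduler sees the sampled expiration times in \(\ell _1\) and goes to \(\ell _2\) iff x will expire first, while a non-prophetic one can only guess.

```python
import random

def simulate_m0(prophetic, runs=20000, seed=0):
    """Estimate the probability of reaching ✓ in M0 under a prophetic
    or a non-prophetic scheduler."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(runs):
        # On entering l1, both clocks are restarted: sample e(x), e(y).
        ex, ey = rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0)
        if prophetic:
            go_l2 = ex < ey            # optimal: l2 wins iff x expires first
        else:
            go_l2 = rng.random() < 0.5  # blind guess
        x_first = ex < ey
        wins += x_first if go_l2 else not x_first
    return wins / runs
```

The prophetic variant reaches ✓ on every run, while the non-prophetic one converges to \(\frac{1}{2}\), matching the discussion above.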
4 The Power of Schedulers
Now that we have defined a number of classes of schedulers, we need to determine how the restrictions affect our ability to optimally control an SA. We thus evaluate the power of scheduler classes w.r.t. unbounded reachability probabilities (Definition 4) on the semantics of SA. We will see that this simple setting already suffices to reveal interesting differences between scheduler classes.
4.1 The Classic Hierarchy
We first establish that all classic history-dependent scheduler classes are equivalent:
Proposition 1
\(\mathfrak {S} ^{ hist }_{\ell ,v,e} \approx \mathfrak {S} ^{ hist }_{\ell ,t,e} \approx \mathfrak {S} ^{ hist }_{{\ell ,e}}\).
Proof
From the transition labels in \(A ' = A \uplus \mathbb {R}^+ \) in the history \((S ' \times A ')^*\), with \(S ' \in \{\, S, S _{\ell ,t,e}, S _{\ell ,e} \,\}\) depending on the scheduler class, we can reconstruct the total elapsed time as well as the values of all clocks: to obtain the total elapsed time, sum the labels in \(\mathbb {R}^+ \) up to each state; to obtain the values of all clocks, do the same per clock and perform the resets of the edges identified by the actions.
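The reconstruction argument of this proof can be sketched as follows, an illustrative encoding in which delays are floats, actions are strings, and `resets` maps each action label to the restart set of its (uniquely labelled) edge:

```python
def reconstruct(labels, resets):
    """From the labels of a history, recover the total elapsed time
    and the values of all clocks: sum the delay labels, and on each
    action label reset the clocks restarted by its edge."""
    total = 0.0
    v = {}
    for label in labels:
        if isinstance(label, float):            # delay step
            total += label
            v = {c: t + label for c, t in v.items()}
        else:                                   # jump: apply the restarts
            for c in resets[label]:
                v[c] = 0.0
    return total, v
```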
The same argument applies among the expiration-order history-dependent classes:
Proposition 2
\(\mathfrak {S} ^{ hist }_{{\ell ,v,o}} \approx \mathfrak {S} ^{ hist }_{\ell ,t,o} \approx \mathfrak {S} ^{ hist }_{\ell ,o}\).
However, the expiration-order history-dependent schedulers are strictly less powerful than the classic history-dependent ones:
Proposition 3
\(\mathfrak {S} ^{ hist }_{\ell ,v,e} \succ \mathfrak {S} ^{ hist }_{\ell ,v,o}\).
Proof
Consider the SA \(M_1\) in Fig. 5. Note that the history does not provide any information for making the choice in \(\ell _1\): we always arrive after having spent zero time in \(\ell _0\) and then having taken the single edge to \(\ell _1\). We can analytically determine that \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ hist }_{\ell ,v,e}) = \frac{3}{4}\) by going from \(\ell _1\) to \(\ell _2\) if \(e(x) \le \frac{1}{2}\) and to \(\ell _3\) otherwise. We would obtain a probability equal to \(\frac{1}{2}\) by always going to either \(\ell _2\) or \(\ell _3\) or by picking either edge with equal probability. This is the best we can do if e is not visible, and thus \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ hist }_{\ell ,v,o}) = \frac{1}{2}\): in \(\ell _1\), \(v(x) = v(y) = 0\) and the expiration order is always “y before x” because y has not yet been started.
Just like for MDP and unbounded reachability probabilities, the classic history-dependent and memoryless schedulers with complete information are equivalent:
Proposition 4
\(\mathfrak {S} ^{ hist }_{\ell ,v,e} \approx \mathfrak {S} ^{ ml }_{\ell ,v,e}\).
Proof sketch
Our definition of TPTS only allows finitely many nondeterministic choices, i.e. we have a very restricted form of continuous-space MDP. We can thus adapt the argument of the corresponding proof for MDP [5, Lemma 10.102]: for each state (of possibly uncountably many), we construct a notional optimal memoryless (and deterministic) scheduler in the same way, replacing the summation by an integration for the continuous measures in the transition function. It remains to show that this scheduler is indeed measurable. For TPTS that are the semantics of SA, this follows from the way clock values are used in the guard sets, so that optimal decisions are constant over intervals of clock values and expiration times (see e.g. the arguments in [12] or [30]).
Proposition 5
\(\mathfrak {S} ^{ hist }_{\ell ,v,o} \succ \mathfrak {S} ^{ ml }_{\ell ,v,o}\).
Proof
Consider the SA \(M_2\) in Fig. 6. Let \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\) be the (unknown) optimal scheduler in \(\mathfrak {S} ^{ ml }_{\ell ,v,o}\) w.r.t. the max. probability of reaching ✓. Define \(\mathfrak {s} ^{ better }_{ hist (l,v,o)} \in \mathfrak {S} ^{ hist }_{\ell ,v,o}\) as: when in \(\ell _2\) and the last edge in the history is the left one (i.e. x is expired), go to \(\ell _3\); otherwise, behave like \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\). This scheduler distinguishes \(\mathfrak {S} ^{ hist }_{{\ell ,v,o}}\) and \(\mathfrak {S} ^{ ml }_{{\ell ,v,o}}\) (by achieving a strictly higher max. probability than \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\)) if and only if there are some combinations of clock values (aspect v) and expiration orders (aspect o) in \(\ell _2\) that can be reached with positive probability via the left edge into \(\ell _2\), for which \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\) must nevertheless decide to go to \(\ell _4\).
All possible clock valuations in \(\ell _2\) can be achieved via either the left or the right edge, but taking the left edge implies that x expires before z in \(\ell _2\). It is thus sufficient to show that \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\) must go to \(\ell _4\) in some cases where x expires before z. The general form of schedulers in \(\mathfrak {S} ^{ ml }_{\ell ,v,o}\) in \(\ell _2\) is “go to \(\ell _3\) iff (a) x expires before z and \(v(x) \in S_1\) or (b) z expires before x and \(v(x) \in S_2\)” where the \(S_i\) are measurable subsets of [0, 8]. \(S_2\) is in fact irrelevant: whatever \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\) does when (b) is satisfied will be mimicked by \(\mathfrak {s} ^{ better }_{ hist (l,v,o)}\) because z can only expire before x when coming via the right edge into \(\ell _2\). Conditions (a) and (b) are independent.
With \(S_1 = [0, 8]\), the max. probability is \(\frac{77}{96} = 0.80208\bar{3}\). Since this is the only scheduler in \(\mathfrak {S} ^{ ml }_{\ell ,v,o}\) that is relevant for our proof and never goes to \(\ell _4\) when x expires before z, it remains to show that the max. probability under \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\) is \(>\frac{77}{96}\). With \(S_1 = [0, \frac{35}{12})\), we obtain a max. probability of \(\frac{7561}{9216} \approx 0.820421\). Thus \(\mathfrak {s} ^{ opt }_{ ml (l,v,o)}\) must sometimes go to \(\ell _4\) even when the left edge was taken, so \(\mathfrak {s} ^{ better }_{ hist (l,v,o)}\) achieves a higher probability and thus distinguishes the classes.
Knowing only the global elapsed time is less powerful than knowing the full history or the values of all clocks:
Proposition 6
\(\mathfrak {S} ^{ hist }_{\ell ,t,e} \succ \mathfrak {S} ^{ ml }_{\ell ,t,e}\) and \(\mathfrak {S} ^{ ml }_{\ell ,v,e} \succ \mathfrak {S} ^{ ml }_{\ell ,t,e}\).
Proof sketch
Consider the SA \(M_3\) in Fig. 7. We have \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ hist }_{\ell ,t,e}) = 1\): when in \(\ell _3\), the scheduler sees from the history which of the two incoming edges was used, and thus knows whether x or y is already expired. It can then make the optimal choice: go to \(\ell _4\) if x is already expired, or to \(\ell _5\) otherwise. We also have \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,v,e}) = 1\): the scheduler sees that either \(v(x) = 0\) or \(v(y) = 0\), which implies that the other clock is already expired, and the argument above applies. However, \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,t,e}) < 1\): the distribution of elapsed time t on entering \(\ell _3\) is itself independent of which edge is taken. With probability \(\frac{1}{4}\), exactly one of e(x) and e(y) is below t in \(\ell _3\), which implies that that clock has just expired and thus the scheduler can decide optimally. Yet with probability \(\frac{3}{4}\), the expiration times are not useful: they are both positive and drawn from the same distribution, but one unknown clock is expired. The wait for x in \(\ell _1\) ensures that comparing t with the expiration times in e does not reveal further information in this case.
Proposition 7
\(\mathfrak {S} ^{ ml }_{\ell ,t,e} \succ \mathfrak {S} ^{ ml }_{{\ell ,e}}\).
Proof
Consider SA \(M_4\) in Fig. 8. We have \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,t,e}) = 1\): in \(\ell _2\), the remaining time until y expires is e(y) and the remaining time until x expires is \(e(x)  t\) for the global time value t as \(\ell _2\) is entered. The scheduler can observe all of these quantities and thus optimally go to \(\ell _3\) if x will expire first, or to \(\ell _4\) otherwise. However, \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,e}) < 1\): e(x) only contains the absolute expiration time of x, but without knowing t or the expiration time of z in \(\ell _1\), and thus the current value v(x), this scheduler cannot know with certainty which of the clocks will expire first and is therefore unable to make an optimal choice in \(\ell _2\).
Finally, we need to compare the memoryless schedulers that see the clock expiration times with memoryless schedulers that see the expiration order. As noted in Sect. 3.1, these two views of the current state are incomparable unless we also see the clock values:
Proposition 8
\(\mathfrak {S} ^{ ml }_{\ell ,v,e} \succ \mathfrak {S} ^{ ml }_{\ell ,v,o}\).
Proof
\(\mathfrak {S} ^{ ml }_{\ell ,v,e} \not \preccurlyeq \mathfrak {S} ^{ ml }_{\ell ,v,o}\) follows from the same argument as in the proof of Proposition 3. \(\mathfrak {S} ^{ ml }_{\ell ,v,e} \succcurlyeq \mathfrak {S} ^{ ml }_{\ell ,v,o}\) holds because knowing the current clock values v and the expiration times e is equivalent to also knowing the expiration order, since that is precisely the order of the differences \(e(c) - v(c)\) for all clocks c.
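The equivalence used in this proof is easy to make concrete: sorting the clocks by their residual times \(e(c) - v(c)\) recovers the expiration order. A minimal sketch (the dictionary representation is our choice):

```python
def expiration_order(v, e):
    """Given clock values v and expiration times e, clock c expires after
    e[c] - v[c] further time units; sorting by this difference yields
    exactly the expiration order used in the proof of Proposition 8."""
    return sorted(v, key=lambda c: e[c] - v[c])
```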
Proposition 9
\(\mathfrak {S} ^{ ml }_{\ell ,t,e} \not \approx \mathfrak {S} ^{ ml }_{\ell ,t,o}\).
Proof
\(\mathfrak {S} ^{ ml }_{\ell ,t,e} \not \preccurlyeq \mathfrak {S} ^{ ml }_{\ell ,t,o}\) follows from the same argument as in the proof of Proposition 3. For \(\mathfrak {S} ^{ ml }_{\ell ,t,e} \not \succcurlyeq \mathfrak {S} ^{ ml }_{\ell ,t,o}\), consider the SA \(M_3\) of Fig. 7. We know from the proof of Proposition 6 that \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,t,e}) < 1\). However, if the scheduler knows the order in which the clocks will expire, it knows which one has already expired (the first one in the order), and can thus make the optimal choice in \(\ell _3\) to achieve \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,t,o}) = 1\).
Proposition 10
\(\mathfrak {S} ^{ ml }_{\ell ,e} \not \approx \mathfrak {S} ^{ ml }_{\ell ,o}\).
Proof
The argument of Proposition 9 applies by observing that, in \(M_3\) of Fig. 7, we also have \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,e}) < 1\) via the same argument as for \(\mathfrak {S} ^{ ml }_{\ell ,t,e}\) in the proof of Proposition 6.
Among the expirationorder schedulers, the hierarchy is as expected:
Proposition 11
\(\mathfrak {S} ^{ ml }_{\ell ,v,o} \succ \mathfrak {S} ^{ ml }_{\ell ,t,o} \succ \mathfrak {S} ^{ ml }_{\ell ,o}\).
Proof sketch
Consider \(M_5\) of Fig. 9. To maximise the probability, in \(\ell _3\) we should go to \(\ell _4\) whenever x is already expired or close to expiring, for which the amount of time spent in \(\ell _2\) is an indicator. \(\mathfrak {S} ^{ ml }_{\ell ,o}\) only knows that x may have expired when the expiration order is “x before y”, but definitely has not expired when it is “y before x”. Schedulers in \(\mathfrak {S} ^{ ml }_{\ell ,t,o}\) can do better: they also see the amount of time spent in \(\ell _2\). Thus \(\mathfrak {S} ^{ ml }_{\ell ,t,o} \succ \mathfrak {S} ^{ ml }_{\ell ,o}\). If we modify \(M_5\) by adding an initial delay on x from a new \(\ell _0\) to \(\ell _1\) as in \(M_3\), then the same argument can be used to prove \(\mathfrak {S} ^{ ml }_{\ell ,v,o} \succ \mathfrak {S} ^{ ml }_{\ell ,t,o}\): the extra delay makes knowing the elapsed time t useless with positive probability, but the exact time spent in \(\ell _2\) is visible to \(\mathfrak {S} ^{ ml }_{\ell ,v,o}\) as v(x).
We have thus established the hierarchy of classic schedulers shown in Fig. 3, noting that some of the relationships follow from the propositions by transitivity.
4.2 The Nonprophetic Hierarchy
Each nonprophetic scheduler class is clearly dominated by the classic and expiration-order scheduler classes that otherwise have the same information, for example \(\mathfrak {S} ^{ hist }_{\ell ,v,e} \succ \mathfrak {S} ^{ hist }_{\ell ,v}\) (with very simple distinguishing SA). We show that the nonprophetic hierarchy follows the shape of the classic case, including the difference between global-time and pure memoryless schedulers, with the notable exception of memoryless schedulers being weaker than history-dependent ones.
Proposition 12
\(\mathfrak {S} ^{ hist }_{\ell ,v} \approx \mathfrak {S} ^{ hist }_{\ell ,t} \approx \mathfrak {S} ^{ hist }_{\ell }\).
Proof
This follows from the argument of Proposition 1.
Proposition 13
\(\mathfrak {S} ^{ hist }_{\ell ,v} \succ \mathfrak {S} ^{ ml }_{\ell ,v}\).
Proof
Consider the SA \(M_6\) in Fig. 10. It is similar to \(M_4\) of Fig. 8, and our arguments are thus similar to the proof of Proposition 7. On \(M_6\), we have \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ hist }_{\ell ,v}) = 1\): in \(\ell _2\), the history reveals which of the two incoming edges was used, i.e. which clock is already expired, so the scheduler can make the optimal choice. However, if neither the history nor e is available, we get \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,v}) = \frac{1}{2}\): the only information that can be used in \(\ell _2\) is the clock values, but \(v(x) = v(y)\), so there is no basis for an informed choice.
Proposition 14
\(\mathfrak {S} ^{ hist }_{\ell ,t} \succ \mathfrak {S} ^{ ml }_{\ell ,t}\) and \(\mathfrak {S} ^{ ml }_{\ell ,v} \succ \mathfrak {S} ^{ ml }_{\ell ,t}\).
Proof
Consider the SA \(M_3\) in Fig. 7. We have \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ hist }_{\ell ,t}) = \mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,v}) = 1\), but \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,t}) = \frac{1}{2}\) by the same arguments as in the proof of Proposition 6.
Proposition 15
\(\mathfrak {S} ^{ ml }_{\ell ,t} \succ \mathfrak {S} ^{ ml }_{\ell }\).
Proof
Consider the SA \(M_4\) in Fig. 8. The schedulers in \(\mathfrak {S} ^{ ml }_{\ell }\) have no information but the current location, so they cannot make an informed choice in \(\ell _2\). This and the simple loop-free structure of \(M_4\) make it possible to analytically calculate the resulting probability: \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell }) = \frac{17}{24} = 0.708\overline{3}\). If information about the global elapsed time t in \(\ell _2\) is available, however, the value of x is revealed. This allows making a better choice, e.g. going to \(\ell _3\) when \(t \le \frac{1}{2}\) and to \(\ell _4\) otherwise, resulting in \(\mathrm {P}^{}_\mathrm {\!max}(\mathfrak {S} ^{ ml }_{\ell ,t}) \approx 0.771\) (statistically estimated with high confidence).
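The gap between the two classes can be reproduced by simulation. The sketch below assumes one concrete choice of distributions for \(M_4\) (x and z uniform on [0, 1], y uniform on [0, 1/2]); Fig. 8 is not reproduced here, so these distributions, the policy labels, and the threshold \(t \ge \frac{1}{4}\) are our own assumptions, though this choice happens to yield a blind optimum of exactly \(\frac{17}{24}\) and a time-aware optimum of about 0.771:

```python
import random

def estimate(policy, n=200_000, seed=12345):
    """Monte Carlo estimate of the reachability probability in an M4-like
    SA with ASSUMED distributions (not taken from Fig. 8): x ~ Uni(0,1) set
    at time 0, z ~ Uni(0,1) is the delay spent in l1, y ~ Uni(0,1/2) set on
    entering l2.  policy(t) returns True to bet that x expires before y."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)      # expiration time of x (set at time 0)
        t = rng.uniform(0.0, 1.0)      # global time on entering l2 (= e(z))
        y = rng.uniform(0.0, 0.5)      # residual expiration time of y
        if policy(t) == (x < t + y):   # correct bet => goal reached
            wins += 1
    return wins / n

blind = estimate(lambda t: True)       # best blind choice: 17/24 here
aware = estimate(lambda t: t >= 0.25)  # threshold on elapsed time t
```

Under these assumed distributions, `blind` comes out near 0.708 and `aware` near 0.771, mirroring the gap computed above.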
We have thus established the hierarchy of nonprophetic schedulers shown in Fig. 4, where some relationships follow from the propositions by transitivity.
5 Experiments
We have built a prototype implementation of lightweight scheduler sampling for SA by extending the Modest Toolset's [24] modes simulator, which already supports deterministic stochastic timed automata (STA [8]). With some care, SA can be encoded into STA. Using the original algorithm for MDP of [18], our prototype works by providing to the schedulers a discretised view of the continuous components of the SA's semantics, which, we recall, is a continuous-space MDP. The currently implemented discretisation is simple: for each real-valued quantity (the value v(c) of clock c, its expiration time e(c), and the global elapsed time t), it identifies all values that lie within the same interval \([\frac{i}{n}, \frac{i+1}{n})\), for integers i, n. We note that better static discretisations are almost certainly possible, e.g. a region construction for the clock values as in [30].
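As a concrete illustration, here is a minimal sketch of the two ingredients just described: the interval discretisation, and a lightweight scheduler in the style of [18] whose decision is a deterministic pseudo-random function of an integer scheduler identifier and the discretised state. (Function names are ours, and [18] uses a dedicated uniform hash; Python's built-in `hash` and `random.Random` serve as stand-ins here.)

```python
import math
import random

def discretise(value, n):
    """Return the index i of the interval [i/n, (i+1)/n) containing value,
    as in the prototype's simple discretisation scheme."""
    return math.floor(value * n)

def decide(sigma, state, n, num_choices):
    """A lightweight scheduler: sigma identifies the sampled scheduler, and
    its decision in a state is derived deterministically from sigma and the
    discretised continuous state components, so the same sigma always
    resolves the nondeterminism in the same way."""
    key = (sigma,) + tuple(discretise(q, n) for q in state)
    return random.Random(hash(key)).randrange(num_choices)
```

Scheduler sampling then amounts to drawing many values of `sigma`, statistically estimating the reachability probability under each, and reporting the best estimate found.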
– \(\{1,2,4\}\): Fine discretisation is not important for optimality, and optimal schedulers are not rare.
– \(\{1,2\}\): Fine discretisation is not important for optimality, but increases the rarity of optimal schedulers.
– \(\{2,4\}\): Fine discretisation is important for optimality, and optimal schedulers are not rare.
– \(\{1\}\): Optimal schedulers are very rare.
– \(\{2\}\): Fine discretisation is important for optimality, but increases the rarity of optimal schedulers.
– \(\{4\}\): Fine discretisation is important for optimality, and optimal schedulers are not rare.
The results in Fig. 11 respect and differentiate our hierarchy. In most cases, we found schedulers whose estimates were within the statistical error of calculated optima or of high-confidence estimates obtained by alternative statistical techniques. The exceptions involve \(M_3\) and \(M_4\). We note that \(M_4\) makes use of an additional clock, increasing the dimensionality of the problem and potentially making near-optimal schedulers rarer. The best result for \(M_3\) and class \(\mathfrak {S} ^{ ml }_{\ell ,t,e}\) was obtained using discretisation factor \(n=2\): a compromise between nearness to optimality and rarity. A greater compromise was necessary for \(M_4\) and classes \(\mathfrak {S} ^{ ml }_{\ell ,t,e}\) and \(\mathfrak {S} ^{ ml }_{\ell ,e}\), where we found near-optimal schedulers to be very rare and achieved the best results using discretisation factor \(n=1\).
The experiments demonstrate that lightweight scheduler sampling can produce useful and informative results with SA. The present theoretical results will allow us to develop better abstractions for SA and thus to construct a refinement algorithm for efficient lightweight verification of SA that will be applicable to realistically sized case studies. As they stand, our results already demonstrate the importance of selecting a proper scheduler class for efficient verification, and show that restricted classes are useful in planning scenarios.
6 Conclusion
We have shown that the various notions of information available to a scheduler class, such as history, clock order, expiration times or overall elapsed time, almost all make distinct contributions to the power of the class in SA. Our choice of notions was based on classic scheduler classes relevant for other stochastic models, previous literature on the character of nondeterminism in and verification of SA, and the need to synthesise simple schedulers in planning. Our distinguishing examples clearly expose how to exploit each notion to improve the probability of reaching a goal. For verification of SA, we have demonstrated the feasibility of lightweight scheduler sampling, where the different notions may be used to finely control the power of the lightweight schedulers. To solve stochastic timed planning problems defined via SA, our analysis helps in the case-by-case selection of an appropriate scheduler class that achieves the desired tradeoff between optimal probabilities and ease of implementation of the resulting plan.
We expect the arguments of this paper to extend to steady-state/frequency measures (by adding loops back from absorbing to initial states in our examples), and that our results for classic schedulers transfer to SA with delayable actions. We propose to use the results to develop better abstractions for SA, the next goal being a refinement algorithm for efficient lightweight verification of SA.
References
1. de Alfaro, L.: The verification of probabilistic systems under memoryless partial-information policies is hard. Technical report, DTIC Document (1999)
2. Alur, R., Courcoubetis, C., Dill, D.: Model-checking for probabilistic real-time systems. In: Albert, J.L., Monien, B., Artalejo, M.R. (eds.) ICALP 1991. LNCS, vol. 510, pp. 115–126. Springer, Heidelberg (1991). https://doi.org/10.1007/3-540-54233-7_128
3. Andel, T.R., Yasinsac, A.: On the credibility of MANET simulations. IEEE Comput. 39(7), 48–54 (2006)
4. Avritzer, A., Carnevali, L., Ghasemieh, H., Happe, L., Haverkort, B.R., Koziolek, A., Menasché, D.S., Remke, A., Sarvestani, S.S., Vicario, E.: Survivability evaluation of gas, water and electricity infrastructures. Electr. Notes Theor. Comput. Sci. 310, 5–25 (2015)
5. Baier, C., Katoen, J.P.: Principles of Model Checking. MIT Press, Cambridge (2008)
6. Ballarini, P., Bertrand, N., Horváth, A., Paolieri, M., Vicario, E.: Transient analysis of networks of stochastic timed automata using stochastic state classes. In: Joshi, K., Siegle, M., Stoelinga, M., D'Argenio, P.R. (eds.) QEST 2013. LNCS, vol. 8054, pp. 355–371. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-40196-1_30
7. Bisgaard, M., Gerhardt, D., Hermanns, H., Krčál, J., Nies, G., Stenger, M.: Battery-aware scheduling in low orbit: the GomX-3 case. In: Fitzgerald, J., Heitmeyer, C., Gnesi, S., Philippou, A. (eds.) FM 2016. LNCS, vol. 9995, pp. 559–576. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-48989-6_34
8. Bohnenkamp, H.C., D'Argenio, P.R., Hermanns, H., Katoen, J.P.: MoDeST: a compositional modeling formalism for hard and softly timed systems. IEEE Trans. Softw. Eng. 32(10), 812–830 (2006)
9. Bravetti, M., D'Argenio, P.R.: Tutte le algebre insieme: concepts, discussions and relations of stochastic process algebras with general distributions. In: Baier, C., Haverkort, B.R., Hermanns, H., Katoen, J.P., Siegle, M. (eds.) Validation of Stochastic Systems. LNCS, vol. 2925, pp. 44–88. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-24611-4_2
10. Bravetti, M., Gorrieri, R.: The theory of interactive generalized semi-Markov processes. Theor. Comput. Sci. 282(1), 5–32 (2002)
11. Brázdil, T., Krčál, J., Křetínský, J., Řehák, V.: Fixed-delay events in generalized semi-Markov processes revisited. In: Katoen, J.P., König, B. (eds.) CONCUR 2011. LNCS, vol. 6901, pp. 140–155. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-23217-6_10
12. Bryans, J., Bowman, H., Derrick, J.: Model checking stochastic automata. ACM Trans. Comput. Log. 4(4), 452–492 (2003)
13. Buchholz, P., Kriege, J., Scheftelowitsch, D.: Model checking stochastic automata for dependability and performance measures. In: DSN, pp. 503–514. IEEE Computer Society (2014)
14. Butkova, Y., Hatefi, H., Hermanns, H., Krčál, J.: Optimal continuous time Markov decisions. In: Finkbeiner, B., Pu, G., Zhang, L. (eds.) ATVA 2015. LNCS, vol. 9364, pp. 166–182. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-24953-7_12
15. D'Argenio, P.R., Hartmanns, A., Legay, A., Sedwards, S.: Statistical approximation of optimal schedulers for probabilistic timed automata. In: Ábrahám, E., Huisman, M. (eds.) IFM 2016. LNCS, vol. 9681, pp. 99–114. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-33693-0_7
16. D'Argenio, P.R., Katoen, J.P.: A theory of stochastic systems part I: stochastic automata. Inf. Comput. 203(1), 1–38 (2005)
17. D'Argenio, P.R., Lee, M.D., Monti, R.E.: Input/output stochastic automata. In: Fränzle, M., Markey, N. (eds.) FORMATS 2016. LNCS, vol. 9884, pp. 53–68. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-44878-7_4
18. D'Argenio, P.R., Legay, A., Sedwards, S., Traonouez, L.M.: Smart sampling for lightweight verification of Markov decision processes. STTT 17(4), 469–484 (2015)
19. Eisentraut, C., Hermanns, H., Zhang, L.: On probabilistic automata in continuous time. In: LICS, pp. 342–351. IEEE Computer Society (2010)
20. Giro, S., D'Argenio, P.R.: Quantitative model checking revisited: neither decidable nor approximable. In: Raskin, J.F., Thiagarajan, P.S. (eds.) FORMATS 2007. LNCS, vol. 4763, pp. 179–194. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-75454-1_14
21. Haas, P.J., Shedler, G.S.: Regenerative generalized semi-Markov processes. Commun. Stat. Stochast. Models 3(3), 409–438 (1987)
22. Hahn, E.M., Hartmanns, A., Hermanns, H.: Reachability and reward checking for stochastic timed automata. In: Electronic Communications of the EASST, AVoCS 2014, vol. 70 (2014)
23. Harrison, P.G., Strulo, B.: SPADES – a process algebra for discrete event simulation. J. Log. Comput. 10(1), 3–42 (2000)
24. Hartmanns, A., Hermanns, H.: The Modest Toolset: an integrated environment for quantitative modelling and verification. In: Ábrahám, E., Havelund, K. (eds.) TACAS 2014. LNCS, vol. 8413, pp. 593–598. Springer, Heidelberg (2014). https://doi.org/10.1007/978-3-642-54862-8_51
25. Hartmanns, A., Hermanns, H., Krčál, J.: Schedulers are no Prophets. In: Probst, C.W., Hankin, C., Hansen, R.R. (eds.) Semantics, Logics, and Calculi. LNCS, vol. 9560, pp. 214–235. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-27810-0_11
26. Hartmanns, A., Sedwards, S., D'Argenio, P.: Efficient simulation-based verification of probabilistic timed automata. In: WSC. IEEE (2017). https://doi.org/10.1109/WSC.2017.8247885
27. Hermanns, H.: Interactive Markov Chains: The Quest for Quantified Quality. LNCS, vol. 2428. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45804-2
28. Hermanns, H., Krämer, J., Krčál, J., Stoelinga, M.: The value of attack-defence diagrams. In: Piessens, F., Viganò, L. (eds.) POST 2016. LNCS, vol. 9635, pp. 163–185. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49635-0_9
29. Kurkowski, S., Camp, T., Colagrosso, M.: MANET simulation studies: the incredibles. Mob. Comput. Commun. Rev. 9(4), 50–61 (2005)
30. Kwiatkowska, M., Norman, G., Segala, R., Sproston, J.: Verifying quantitative properties of continuous probabilistic timed automata. In: Palamidessi, C. (ed.) CONCUR 2000. LNCS, vol. 1877, pp. 123–137. Springer, Heidelberg (2000). https://doi.org/10.1007/3-540-44618-4_11
31. Legay, A., Sedwards, S., Traonouez, L.M.: Estimating rewards & rare events in nondeterministic systems. In: Electronic Communications of the EASST, AVoCS 2015, vol. 72 (2015)
32. Legay, A., Sedwards, S., Traonouez, L.M.: Scalable verification of Markov decision processes. In: Canal, C., Idani, A. (eds.) SEFM 2014. LNCS, vol. 8938, pp. 350–362. Springer, Cham (2015). https://doi.org/10.1007/978-3-319-15201-1_23
33. Matthes, K.: Zur Theorie der Bedienungsprozesse. In: 3rd Prague Conference on Information Theory, Statistical Decision Functions and Random Processes, pp. 513–528 (1962)
34. ns-3 Consortium: ns-3: a discrete-event network simulator for internet systems. https://www.nsnam.org/
35. Pongor, G.: OMNeT: objective modular network testbed. In: MASCOTS, pp. 323–326. The Society for Computer Simulation (1993)
36. Ruijters, E., Stoelinga, M.: Better railway engineering through statistical model checking. In: Margaria, T., Steffen, B. (eds.) ISoLA 2016. LNCS, vol. 9952, pp. 151–165. Springer, Cham (2016). https://doi.org/10.1007/978-3-319-47166-2_10
37. Song, L., Zhang, L., Godskesen, J.C.: Late weak bisimulation for Markov automata. CoRR abs/1202.4116 (2012)
38. Strulo, B.: Process algebra for discrete event simulation. Ph.D. thesis, Imperial College of Science, Technology and Medicine, University of London (1993)
39. Wolf, V., Baier, C., Majster-Cederbaum, M.E.: Trace semantics for stochastic systems with nondeterminism. Electr. Notes Theor. Comput. Sci. 164(3), 187–204 (2006)
40. Wolovick, N.: Continuous probability and nondeterminism in labeled transition systems. Ph.D. thesis, Universidad Nacional de Córdoba, Córdoba, Argentina (2012)
41. Wolovick, N., Johr, S.: A characterization of meaningful schedulers for continuous-time Markov decision processes. In: Asarin, E., Bouyer, P. (eds.) FORMATS 2006. LNCS, vol. 4202, pp. 352–367. Springer, Heidelberg (2006). https://doi.org/10.1007/11867340_25
42. Zeng, X., Bagrodia, R.L., Gerla, M.: GloMoSim: a library for parallel simulation of large-scale wireless networks. In: PADS, pp. 154–161. IEEE Computer Society (1998)
Copyright information
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this book are included in the book's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the book's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.