Abstract
A continuous-time Markov decision process (CTMDP) is a generalization of a continuous-time Markov chain in which probabilistic and nondeterministic choices coexist. This paper presents an efficient algorithm to compute the maximum (or minimum) probability of reaching a set of goal states within a given time bound in a uniform CTMDP, i.e., a CTMDP in which the delay-time distribution per state visit is the same for all states. We prove that these probabilities coincide for (time-abstract) history-dependent and Markovian schedulers that resolve nondeterminism either deterministically or in a randomized way.
This work is supported by the NWO-DFG bilateral project Validation of Stochastic Systems.
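The computation described in the abstract can be sketched as follows: condition on the number of Poisson-distributed jumps in [0, t] (possible because the exit rate is uniform), and maximize by backward induction over the embedded discrete-time MDP with Poisson-weighted goal rewards. This is a hedged reconstruction under stated assumptions, not the paper's exact pseudocode; all names (`max_timed_reach`, the toy CTMDP) are illustrative, goal states are assumed to have been made absorbing, and the truncation point is chosen crudely rather than via the Fox-Glynn method the paper would rely on.

```python
import math

def max_timed_reach(states, actions, P, goal, E, t, K=None):
    """Maximum probability of reaching `goal` within time `t` in a uniform
    CTMDP with exit rate E, via Poisson-weighted backward induction over
    the embedded DTMDP. Goal states are assumed absorbing, so the event
    "goal reached within n jumps" equals "in goal after exactly n jumps".
    P[(s, a)] is a dict s' -> probability (a row of the embedded DTMDP)."""
    lam = E * t
    if K is None:
        # crude truncation point; the neglected Poisson tail is negligible
        K = max(20, int(lam + 10.0 * math.sqrt(lam) + 10))
    # Poisson weights psi[n] = e^{-lam} * lam^n / n!
    psi = [math.exp(-lam)]
    for n in range(1, K + 1):
        psi.append(psi[-1] * lam / n)
    # backward induction:
    #   v_n(s) = psi[n] * 1_goal(s) + max_a sum_{s'} P(s,a,s') * v_{n+1}(s')
    v = {s: 0.0 for s in states}
    for n in range(K, -1, -1):
        v = {s: psi[n] * (1.0 if s in goal else 0.0)
                + max(sum(p * v[s2] for s2, p in P[(s, a)].items())
                      for a in actions[s])
             for s in states}
    return v  # v[s] = max probability of reaching goal from s within time t

# Toy uniform CTMDP (E = 1): from s0, action 'a' jumps to the goal s1,
# action 'b' to the sink s2; s1 and s2 are absorbing.
states = ['s0', 's1', 's2']
goal = {'s1'}
actions = {'s0': ['a', 'b'], 's1': ['stay'], 's2': ['stay']}
P = {('s0', 'a'): {'s1': 1.0}, ('s0', 'b'): {'s2': 1.0},
     ('s1', 'stay'): {'s1': 1.0}, ('s2', 'stay'): {'s2': 1.0}}
v = max_timed_reach(states, actions, P, goal, E=1.0, t=1.0)
print(v['s0'])  # optimal scheduler picks 'a': 1 - e^{-1}, about 0.6321
```

The backward sweep yields a step-dependent (Markovian) scheduler; the paper's result that time-abstract history-dependent schedulers achieve no more is what justifies reading the computed value as the maximum over all time-abstract schedulers.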
Copyright information
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Baier, C., Haverkort, B., Hermanns, H., Katoen, JP. (2004). Efficient Computation of Time-Bounded Reachability Probabilities in Uniform Continuous-Time Markov Decision Processes. In: Jensen, K., Podelski, A. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2004. Lecture Notes in Computer Science, vol 2988. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-24730-2_5
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-21299-7
Online ISBN: 978-3-540-24730-2
eBook Packages: Springer Book Archive