Formal Methods in System Design

Volume 42, Issue 3, pp 301–327

Symbolic algorithms for qualitative analysis of Markov decision processes with Büchi objectives

  • Krishnendu Chatterjee
  • Monika Henzinger
  • Manas Joglekar
  • Nisarg Shah


We consider Markov decision processes (MDPs) with Büchi (liveness) objectives, and study the problem of computing the almost-sure winning set, i.e., the set of states from which the objective can be ensured with probability 1. Our contributions are as follows. First, we present the first subquadratic symbolic algorithm to compute the almost-sure winning set for MDPs with Büchi objectives; our algorithm takes \(O(n \cdot\sqrt{m})\) symbolic steps, as compared to the previously known algorithm that takes \(O(n^2)\) symbolic steps, where n is the number of states and m is the number of edges of the MDP. In practice MDPs have constant out-degree, and then our symbolic algorithm takes \(O(n \cdot\sqrt{n})\) symbolic steps, as compared to the previously known \(O(n^2)\)-symbolic-step algorithm. Second, we present a new algorithm, called the win-lose algorithm, with the following two properties: (a) it iteratively computes subsets of the almost-sure winning set and of its complement, whereas all previous algorithms discover the almost-sure winning set only upon termination; and (b) it requires \(O(n \cdot\sqrt{K})\) symbolic steps, where K is the maximal number of edges of the strongly connected components (SCCs) of the MDP. The win-lose algorithm requires symbolic computation of SCCs. Third, we improve the algorithm for symbolic SCC computation: the previously known algorithm takes a linear number of symbolic steps, and our new algorithm improves the constants associated with that linear bound. In the worst case the previously known algorithm takes 5⋅n symbolic steps, whereas our new algorithm takes 4⋅n symbolic steps.
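The complexity bounds above count symbolic steps, i.e., predecessor (Pre) and successor (Post) image computations applied to whole sets of states at once. The following minimal sketch (not the paper's algorithms) illustrates this primitive and the classical forward-intersect-backward characterization of an SCC; plain Python sets stand in for the BDDs a real symbolic model checker (e.g. one built on CUDD [22]) would use, and the example graph is invented for illustration.

```python
# Symbolic-style graph operations: each call to pre/post is one
# "symbolic step" in the sense of the abstract. Sets stand in for BDDs;
# the graph below is an illustrative example, not from the paper.

def pre(edges, S):
    """One symbolic step: all states with an edge into the set S."""
    return {u for (u, v) in edges if v in S}

def post(edges, S):
    """One symbolic step: all states reachable from S in one edge."""
    return {v for (u, v) in edges if u in S}

def backward_reach(edges, target):
    """Fixpoint of Pre images: states that can reach `target`.

    Returns the set together with the number of symbolic steps used
    (at most n + 1 iterations for n states).
    """
    reach, steps = set(target), 0
    while True:
        frontier = pre(edges, reach) - reach
        steps += 1
        if not frontier:
            return reach, steps
        reach |= frontier

def forward_reach(edges, source):
    """Fixpoint of Post images: states reachable from `source`."""
    reach = set(source)
    while True:
        frontier = post(edges, reach) - reach
        if not frontier:
            return reach
        reach |= frontier

def scc_of(edges, v):
    """SCC of v = (states reachable from v) ∩ (states reaching v)."""
    fwd = forward_reach(edges, {v})
    bwd, _ = backward_reach(edges, {v})
    return fwd & bwd

# Example: cycle 0 -> 1 -> 2 -> 0 with a tail 2 -> 3 -> 4.
edges = {(0, 1), (1, 2), (2, 0), (2, 3), (3, 4)}
states, steps = backward_reach(edges, {4})  # states == {0, 1, 2, 3, 4}
component = scc_of(edges, 0)                # the SCC {0, 1, 2}
```

The symbolic-step counts of the abstract bound exactly how many such Pre/Post fixpoint iterations the algorithms perform; improving the constant from 5⋅n to 4⋅n means fewer image computations in the worst case, which dominates the cost when each image is an expensive BDD operation.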


Keywords: Markov decision processes · Probabilistic verification · Büchi objectives · Symbolic algorithms



Acknowledgements

We thank Fabio Somenzi for sharing the facts about the performance comparison of the algorithm of [3] and the algorithm of [12]. We thank anonymous reviewers for many helpful comments that improved the presentation of the paper.


References

  1. Barnat J, Chaloupka J, van de Pol J (2011) Distributed algorithms for SCC decomposition. J Log Comput 21(1):23–44
  2. Bianco A, de Alfaro L (1995) Model checking of probabilistic and nondeterministic systems. In: FSTTCS 95. LNCS, vol 1026. Springer, Berlin, pp 499–513
  3. Bloem R, Gabow HN, Somenzi F (2000) An algorithm for strongly connected component analysis in n log n symbolic steps. In: FMCAD, pp 37–54
  4. Chatterjee K, Henzinger M (2011) Faster and dynamic algorithms for maximal end-component decomposition and related graph problems in probabilistic verification. In: SODA, pp 1318–1336
  5. Chatterjee K, Jurdziński M, Henzinger TA (2003) Simple stochastic parity games. In: CSL’03. LNCS, vol 2803. Springer, Berlin, pp 100–113
  6. Chatterjee K, Jurdziński M, Henzinger TA (2004) Quantitative stochastic parity games. In: SODA’04. SIAM, Philadelphia, pp 121–130
  7. Courcoubetis C, Yannakakis M (1995) The complexity of probabilistic verification. J ACM 42(4):857–907
  8. de Alfaro L (1997) Formal verification of probabilistic systems. PhD thesis, Stanford University
  9. de Alfaro L, Faella M, Majumdar R, Raman V (2005) Code-aware resource management. In: EMSOFT 05. ACM, New York
  10. de Alfaro L, Roy P (2007) Magnifying-lens abstraction for Markov decision processes. In: CAV, pp 325–338
  11. Filar J, Vrieze K (1997) Competitive Markov decision processes. Springer, Berlin
  12. Gentilini R, Piazza C, Policriti A (2003) Computing strongly connected components in a linear number of symbolic steps. In: SODA, pp 573–582
  13. Howard RA (1960) Dynamic programming and Markov processes. MIT Press, Cambridge
  14. Immerman N (1981) Number of quantifiers is better than number of tape cells. J Comput Syst Sci 22:384–406
  15. Kemeny JG, Snell JL, Knapp AW (1966) Denumerable Markov chains. Van Nostrand, Princeton
  16. Kwiatkowska M, Norman G, Parker D (2000) Verifying randomized distributed algorithms with PRISM. In: Workshop on advances in verification (WAVE’00)
  17. Pogosyants A, Segala R, Lynch N (2000) Verification of the randomized consensus algorithm of Aspnes and Herlihy: a case study. Distrib Comput 13(3):155–186
  18. Puterman ML (1994) Markov decision processes. Wiley, New York
  19. Segala R (1995) Modeling and verification of randomized distributed real-time systems. PhD thesis, MIT. Technical Report MIT/LCS/TR-676
  20. Sleator DD, Tarjan RE (1983) A data structure for dynamic trees. J Comput Syst Sci 26(3):362–391
  21. Somenzi F. Personal communication
  22. Somenzi F (1998) CUDD: Colorado University decision diagram package
  23. Stoelinga MIA (2002) Fun with FireWire: experiments with verifying the IEEE1394 root contention protocol. In: Formal Aspects of Computing
  24. Thomas W (1997) Languages, automata, and logic. In: Rozenberg G, Salomaa A (eds) Beyond words. Handbook of formal languages, vol 3. Springer, Berlin, pp 389–455, Chap 7

Copyright information

© Springer Science+Business Media New York 2012

Authors and Affiliations

  • Krishnendu Chatterjee (1)
  • Monika Henzinger (2)
  • Manas Joglekar (3)
  • Nisarg Shah (4)

  1. IST Austria, Klosterneuburg, Austria
  2. University of Vienna, Vienna, Austria
  3. Stanford University, Palo Alto, USA
  4. Carnegie Mellon University, Pittsburgh, USA
