Abstract
An important problem in implementing Markov chain Monte Carlo algorithms is determining the convergence time: the number of iterations before the chain is close to its stationary distribution. For many Markov chains used in practice this time is not known, and no general technique seems to yield upper bounds on the convergence time that are sharp enough to be useful in all cases of interest. Practitioners therefore carry out some form of statistical analysis to assess convergence. This has led to a number of methods, known as convergence diagnostics, that attempt to detect whether a Markov chain is far from stationarity. We study the problem of testing convergence in the following settings and prove that it is computationally hard:
- Given a Markov chain that mixes rapidly, it is SZK-hard (hard for Statistical Zero Knowledge) to distinguish whether, starting from a given state, the chain is close to stationarity by time t or far from stationarity at time ct, for a constant c. We show the problem is in AM ∩ coAM.
- Given a Markov chain that mixes rapidly, it is coNP-hard to distinguish, from an arbitrary starting state, whether the chain is close to stationarity by time t or far from stationarity at time ct, for a constant c. The problem is in coAM.
- It is PSPACE-complete to distinguish whether the Markov chain is close to stationarity by time t or still far from stationarity at time ct, for c ≥ 1.
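To make the quantity in these statements concrete: for a small, explicitly given chain, "close to stationarity by time t" refers to the total variation distance between the t-step distribution and the stationary distribution. The following minimal sketch (not from the paper; the 3-state lazy walk and its uniform stationary distribution are illustrative assumptions) computes this distance exactly by powering the transition matrix:

```python
def mat_mul(A, B):
    # Multiply two square matrices given as lists of rows.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tv_distance(p, q):
    # Total variation distance between two probability distributions.
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# Hypothetical example: a lazy random walk on a 3-cycle; by symmetry its
# stationary distribution is uniform.
P = [[0.50, 0.25, 0.25],
     [0.25, 0.50, 0.25],
     [0.25, 0.25, 0.50]]
pi = [1 / 3, 1 / 3, 1 / 3]

# Distance to stationarity from state 0 after t steps: compare row 0 of
# P^t against pi. The mixing time is the first t at which this drops
# below 1/4; deciding whether that happens by time t (versus remaining
# far at time ct) is the problem the abstract proves hard in general.
Pt = P
for t in range(1, 11):
    d = tv_distance(Pt[0], pi)
    Pt = mat_mul(Pt, P)
```

For exponentially large state spaces, such as those arising in practice, the matrix P^t cannot be written down, which is why the testing problem is studied through its computational complexity rather than by direct calculation.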
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Bhatnagar, N., Bogdanov, A., Mossel, E. (2011). The Computational Complexity of Estimating MCMC Convergence Time. In: Goldberg, L.A., Jansen, K., Ravi, R., Rolim, J.D.P. (eds) Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques. APPROX RANDOM 2011 2011. Lecture Notes in Computer Science, vol 6845. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-22935-0_36
Print ISBN: 978-3-642-22934-3
Online ISBN: 978-3-642-22935-0