Abstract
As discussed in Chapter 2, an important drawback of Raftery and Lewis’ (1992a, 1996) convergence control method is that the discretized version of the Markov chain is not itself a Markov chain, unless a stringent lumpability condition holds (see Kemeny and Snell, 1960). This partially invalidates the binary control method, although it still provides useful preliminary information on the required number of iterations. The discrete aspect of the criterion nonetheless remains attractive for its intuitive flavour and, while the Duality Principle of Chapter 1 cannot be invoked in every setting, this chapter shows how renewal theory can be used to construct a theoretically valid discretization method for general Markov chains. We then consider some convergence control methods based on these discretized chains, although the chains can be used in many other ways (see also Chapter 5).
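The renewal idea underlying the chapter can be illustrated on a toy example. The sketch below is not the chapter’s construction; it only demonstrates the regeneration property that renewal theory exploits: successive visits of a recurrent chain to an atom cut the trajectory into independent, identically distributed tours, so tour-level statistics behave like an i.i.d. sample. The function names (`simulate_chain`, `split_into_tours`) and the two-state transition matrix are illustrative assumptions, not notation from the text.

```python
import random

def simulate_chain(P, start, n_steps, rng):
    """Simulate a finite-state Markov chain with row-stochastic matrix P."""
    x = start
    path = [x]
    for _ in range(n_steps):
        u = rng.random()
        cum = 0.0
        for j, p in enumerate(P[x]):
            cum += p
            if u < cum:
                x = j
                break
        path.append(x)
    return path

def split_into_tours(path, atom):
    """Cut the trajectory at successive visits to `atom`.

    By the regeneration property, the blocks between visits to a
    recurrent atom are independent and identically distributed.
    """
    hits = [t for t, s in enumerate(path) if s == atom]
    return [path[hits[i]:hits[i + 1]] for i in range(len(hits) - 1)]

# Illustrative two-state chain: its stationary distribution puts mass
# 5/6 on state 0, so the mean return time to state 0 is 6/5
# (Kac's formula: mean return time = 1 / stationary mass).
P = [[0.9, 0.1],
     [0.5, 0.5]]
rng = random.Random(0)
path = simulate_chain(P, start=0, n_steps=50_000, rng=rng)
tours = split_into_tours(path, atom=0)
mean_tour = sum(len(t) for t in tours) / len(tours)
pi_atom_hat = 1.0 / mean_tour  # should be close to 5/6
```

Because the tours are i.i.d., standard central-limit arguments apply to averages of tour statistics even though the raw chain is dependent, which is what makes renewal-based convergence diagnostics theoretically valid.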
© 1998 Springer Science+Business Media New York
Guihenneuc-Jouyaux, C., Robert, C.P. (1998). Valid Discretization via Renewal Theory. In: Robert, C.P. (eds) Discretization and MCMC Convergence Assessment. Lecture Notes in Statistics, vol 135. Springer, New York, NY. https://doi.org/10.1007/978-1-4612-1716-9_4
Print ISBN: 978-0-387-98591-6
Online ISBN: 978-1-4612-1716-9