Empirical Convergence Rate of a Markov Transition Matrix

Main Article Content

Steven T. Garren

Abstract

The convergence rate of a Markov transition matrix is governed by its second largest eigenvalue, the first largest eigenvalue being unity, under general regularity conditions. Garren and Smith (2000) constructed confidence intervals on this second largest eigenvalue, based on asymptotic normality theory, and performed simulations that were somewhat limited in scope by the computing power available at the time. Herein we focus on simulating coverage intervals, taking advantage of current computing power, and compare the simulated coverage intervals to the theoretical confidence intervals of Garren and Smith (2000).
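As a concrete illustration of the eigenvalue claim (a sketch not taken from the article, using a hypothetical two-state chain): for a transition matrix P = [[1-a, a], [b, 1-b]], the eigenvalues are 1 and lambda2 = 1 - a - b, and the distance of P^n from its stationary distribution shrinks geometrically at rate |lambda2|.

```python
# Sketch (illustrative, not from the article): convergence rate of a
# two-state Markov chain is governed by the second largest eigenvalue.

def mat_mult(A, B):
    """Multiply two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 0.3, 0.2
P = [[1 - a, a], [b, 1 - b]]
lambda2 = 1 - a - b                  # second largest eigenvalue = 0.5

# Stationary distribution of this chain: pi = (b, a) / (a + b).
pi = [b / (a + b), a / (a + b)]

# Distance of the (0, 0) entry of P^n from pi[0] shrinks like |lambda2|^n.
Pn = [[1.0, 0.0], [0.0, 1.0]]        # identity, accumulates P^n
dists = []
for n in range(1, 11):
    Pn = mat_mult(Pn, P)
    dists.append(abs(Pn[0][0] - pi[0]))

# Ratios of successive distances recover |lambda2| empirically.
ratios = [dists[n + 1] / dists[n] for n in range(len(dists) - 1)]
print(round(ratios[-1], 6))          # -> 0.5
```

Estimating this ratio from a simulated sample path, rather than from the known matrix, is the empirical counterpart studied in the article.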

Keywords:
Markov chain Monte Carlo, Gibbs sampling, Hilbert-Schmidt operator, eigenvalue

Article Details

How to Cite
Garren, S. (2019). Empirical Convergence Rate of a Markov Transition Matrix. Asian Journal of Probability and Statistics, 3(4), 1-7. Retrieved from http://journalajpas.com/index.php/AJPAS/article/view/30101
Section
Original Research Article

References

Garren ST, Smith RL. Estimating the second largest eigenvalue of a Markov transition matrix. Bernoulli.
2000;6(2):215-242.

Gamerman D, Lopes HF. Markov chain Monte Carlo: Stochastic simulation for Bayesian inference. 2nd
Edition, Chapman and Hall, New York; 2006.

Van Ravenzwaaij D, Cassey P, Brown SD. A simple introduction to Markov chain Monte Carlo sampling.
Psychonomic Bulletin & Review. 2018;25(1):143-154.

Cowles MK, Carlin BP. Markov chain Monte Carlo convergence diagnostics: A comparative review. Journal
of the American Statistical Association. 1996;91(434):883-904.

Quiroz M, Kohn R, Villani M, Tran MN. Speeding up MCMC by efficient data subsampling. Journal of the
American Statistical Association. 2018;1-35.

Denwood MJ. runjags: An R package providing interface utilities, model templates, parallel computing
methods and additional distributions for MCMC models in JAGS. Journal of Statistical Software.
2016;71(9):1-25.

Robert CP, Elvira V, Tawn N, Wu C. Accelerating MCMC algorithms. WIREs Computational Statistics.
2018;10:e1345.

Nylander JAA, Wilgenbusch JC, Warren DL, Swofford DL. AWTY (are we there yet?): A system for
graphical exploration of MCMC convergence in Bayesian phylogenetics. Bioinformatics. 2008;24(4):581-583.

Fulton BJ, Petigura EA, Blunt S, Sinukoff E. RadVel: The radial velocity modeling toolkit. Publications of
the Astronomical Society of the Pacific. 2018;130(986):044504.

Raftery AE, Lewis SM. How many iterations in the Gibbs sampler? In Bernardo JM, Berger JO, Dawid AP,
and Smith AFM (eds), Bayesian Statistics. New York:Oxford University Press. 1992;4:763-773.


Gelfand AE, Smith AFM. Sampling-based approaches to calculating marginal densities. Journal of the
American Statistical Association. 1990;85:398-409.

Tierney L. Markov chains for exploring posterior distributions (with discussion). Annals of Statistics.
1994;22:1701-1762.

R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical
Computing, Vienna, Austria; 2019.
Available: http://www.R-project.org