Surprise probabilities in Markov chains
From MaRDI portal
Publication:5366964
DOI: 10.1017/S0963548317000074 · zbMATH Open: 1372.60107 · arXiv: 1408.0822 · MaRDI QID: Q5366964
Authors: James Norris, Yuval Peres, Alex Zhai
Publication date: 10 October 2017
Published in: Combinatorics, Probability and Computing
Abstract: In a Markov chain started at a state \(x\), the hitting time \(\tau_y\) is the first time that the chain reaches another state \(y\). We study the probability \(\mathbb{P}_x[\tau_y = t]\) that the first visit to \(y\) occurs precisely at a given time \(t\). Informally speaking, the event that a new state is visited at a large time \(t\) may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with \(n\) states, \(\mathbb{P}_x[\tau_y = t] \le n/t\). 2) In a reversible chain with \(n\) states, \(\mathbb{P}_x[\tau_y = t] \le \sqrt{2n}/t\) for \(t \ge 4n + 4\). 3) For random walk on a simple graph with \(n \ge 2\) vertices, \(\mathbb{P}_x[\tau_y = t] \le 4e \log n / t\). We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): For random walk on an \(n\)-vertex graph, for every initial vertex \(x\), \[ \sum_y \left( \sup_{t \ge 0} p^t(x, y) \right) = O(\log n). \]
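The first bound is easy to probe empirically. Below is a minimal Monte Carlo sketch (not from the paper) that estimates the surprise probability \(\mathbb{P}_x[\tau_y = t]\) for random walk on a small cycle and compares it to the general \(n/t\) bound; the chain, states, and sample size are illustrative choices.

```python
import random

def first_hit_prob(P, x, y, t, trials=20000, seed=0):
    """Monte Carlo estimate of P_x[tau_y = t]: the probability that a chain
    with transition matrix P, started at state x, first visits y exactly at time t."""
    rng = random.Random(seed)
    n = len(P)
    hits = 0
    for _ in range(trials):
        state = x
        for step in range(1, t + 1):
            state = rng.choices(range(n), weights=P[state])[0]
            if state == y:          # first visit to y
                if step == t:
                    hits += 1
                break
    return hits / trials

# Illustrative example: simple random walk on a cycle with n = 6 states.
n = 6
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][(i - 1) % n] = 0.5
    P[i][(i + 1) % n] = 0.5

t = 21  # odd, since state 3 is at odd distance from 0 on a bipartite cycle
est = first_hit_prob(P, x=0, y=3, t=t)
print(est, "<=", n / t)  # the estimate should lie below the n/t bound
```

The point of the theorem is that the \(n/t\) decay holds with no structural knowledge of the chain beyond its number of states.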
Full work available at URL: https://arxiv.org/abs/1408.0822
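The displayed estimate \(\sum_y \sup_{t \ge 0} p^t(x, y) = O(\log n)\) can likewise be checked numerically. The sketch below (an illustration, not the paper's proof) computes the walk's distributions by repeated matrix-vector products, truncating the supremum at a finite horizon T; the path graph and the horizon are assumed choices.

```python
import math

def sum_of_sups(P, x, T=500):
    """Approximate sum over y of sup_{0 <= t <= T} p^t(x, y)
    for a transition matrix P given as a list of row lists."""
    n = len(P)
    dist = [0.0] * n
    dist[x] = 1.0                 # p^0(x, .) is a point mass at x
    sups = dist[:]                # running max of p^t(x, y) over t
    for _ in range(T):
        dist = [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
        sups = [max(s, d) for s, d in zip(sups, dist)]
    return sum(sups)

# Illustrative example: random walk on a path with n = 16 vertices.
n = 16
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
    for j in nbrs:
        P[i][j] = 1.0 / len(nbrs)

total = sum_of_sups(P, x=0)
print(total, "vs log n =", math.log(n))  # total stays a small multiple of log n
```

Note that the sum is at least 1 regardless of the graph (from the \(t = 0\) term at \(y = x\)); the content of the estimate is the logarithmic upper bound.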
Recommendations
Cites Work
- Markov chains and mixing times. With a chapter on ``Coupling from the past'' by James G. Propp and David B. Wilson.
- Probability. Theory and examples.
- Intersections of random walks.
- On absorption times and Dirichlet eigenvalues
- A self-avoiding random walk
- Characterization of cutoff for reversible Markov chains
- Operator Limit Theorems
- Nonconcentration of return times
- Total variation cutoff in a tree
Cited In (4)
This page was built for publication: Surprise probabilities in Markov chains