Surprise probabilities in Markov chains
From MaRDI portal
Publication:5366964
Abstract: In a Markov chain started at a state \(x\), the hitting time \(\tau_y\) is the first time that the chain reaches another state \(y\). We study the probability \(\Pr_x[\tau_y = t]\) that the first visit to \(y\) occurs precisely at a given time \(t\). Informally speaking, the event that a new state is visited at a large time \(t\) may be considered a "surprise". We prove the following three bounds: 1) In any Markov chain with \(n\) states, \(\Pr_x[\tau_y = t] \le n/t\). 2) In a reversible chain with \(n\) states, \(\Pr_x[\tau_y = t] \le \sqrt{2n}/t\) for \(t \ge 4n + 4\). 3) For random walk on a simple graph with \(n \ge 2\) vertices, \(\Pr_x[\tau_y = t] \le 4e \log n / t\). We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): for random walk on an \(n\)-vertex graph, for every initial vertex \(x\),
\[ \sum_y \Bigl( \sup_{t \ge 0} p^t(x, y) \Bigr) = O(\log n). \]
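The first bound above can be checked empirically. The following sketch (standard library only; the biased-cycle chain and the Monte Carlo tolerance are illustrative choices, not from the paper) estimates the first-visit probabilities \(\Pr_x[\tau_y = t]\) by simulation and verifies that they stay below the general bound \(n/t\):

```python
import random

def first_hit_distribution(P, x, y, t_max, trials, seed=0):
    """Estimate Pr_x(tau_y = t) for t = 1..t_max by Monte Carlo.

    P is a transition matrix given as a list of rows (lists of floats).
    """
    rng = random.Random(seed)
    n = len(P)
    counts = [0] * (t_max + 1)
    for _ in range(trials):
        state = x
        for t in range(1, t_max + 1):
            state = rng.choices(range(n), weights=P[state])[0]
            if state == y:  # first visit to y happens at time t
                counts[t] += 1
                break
    return [c / trials for c in counts]

# Example chain (an arbitrary choice): a biased walk on a cycle of
# n states, stepping clockwise with probability 0.75.
n = 10
P = [[0.0] * n for _ in range(n)]
for i in range(n):
    P[i][(i + 1) % n] = 0.75
    P[i][(i - 1) % n] = 0.25

probs = first_hit_distribution(P, x=0, y=n // 2, t_max=50, trials=20000)

# The general bound Pr_x(tau_y = t) <= n/t should hold at every t;
# the 0.02 term is slack for Monte Carlo error, not part of the bound.
assert all(probs[t] <= n / t + 0.02 for t in range(1, 51))
```

Any other irreducible chain can be substituted for `P`; the bound requires no structural knowledge beyond the number of states.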
Cites work
- scientific article; zbMATH DE number 1256746
- scientific article; zbMATH DE number 3255204
- A self-avoiding random walk
- Characterization of cutoff for reversible Markov chains
- Intersections of random walks.
- Markov chains and mixing times. With a chapter on ``Coupling from the past'' by James G. Propp and David B. Wilson.
- Nonconcentration of return times
- On absorption times and Dirichlet eigenvalues
- Operator Limit Theorems
- Probability. Theory and examples.
- Total variation cutoff in a tree
Cited in 4 documents