Surprise probabilities in Markov chains

From MaRDI portal
Publication: 5366964

DOI: 10.1017/S0963548317000074
zbMATH Open: 1372.60107
arXiv: 1408.0822
MaRDI QID: Q5366964


Authors:


Publication date: 10 October 2017

Published in: Combinatorics, Probability and Computing

Abstract: In a Markov chain started at a state $x$, the hitting time $\tau(y)$ is the first time that the chain reaches another state $y$. We study the probability $\mathbf{P}_x(\tau(y) = t)$ that the first visit to $y$ occurs precisely at a given time $t$. Informally speaking, the event that a new state is visited at a large time $t$ may be considered a "surprise". We prove the following three bounds:

1) In any Markov chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{n}{t}$.
2) In a reversible chain with $n$ states, $\mathbf{P}_x(\tau(y) = t) \le \frac{\sqrt{2n}}{t}$ for $t \ge 4n+4$.
3) For random walk on a simple graph with $n \ge 2$ vertices, $\mathbf{P}_x(\tau(y) = t) \le \frac{4e \log n}{t}$.

We construct examples showing that these bounds are close to optimal. The main feature of our bounds is that they require very little knowledge of the structure of the Markov chain. To prove the bound for random walk on graphs, we establish the following estimate conjectured by Aldous, Ding and Oveis-Gharan (private communication): For random walk on an $n$-vertex graph, for every initial vertex $x$,
\[ \sum_y \left( \sup_{t \ge 0} p^t(x, y) \right) = O(\log n). \]
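The first bound above can be illustrated empirically. The following is a minimal sketch (not from the paper): a Monte Carlo estimate of $\mathbf{P}_x(\tau(y) = t)$ for simple random walk on an $n$-cycle, a chain chosen here purely for illustration, compared against the general bound $n/t$.

```python
import random

def first_visit_prob(n, x, y, t, trials=20000, seed=0):
    """Estimate P_x(tau(y) = t) for simple random walk on the n-cycle:
    the fraction of sampled walks whose *first* visit to y occurs
    exactly at step t."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        pos = x
        for step in range(1, t + 1):
            pos = (pos + rng.choice((-1, 1))) % n  # step left or right
            if pos == y:
                if step == t:
                    hits += 1
                break  # first visit happened; later steps are irrelevant
    return hits / trials

n, x, y, t = 10, 0, 5, 25
p = first_visit_prob(n, x, y, t)
# Bound 1 of the paper: P_x(tau(y) = t) <= n/t, here 10/25 = 0.4.
print(f"estimate {p:.4f} <= bound {n / t}")
```

The estimate stays well below $n/t$; the point of the paper's bounds is that this holds with no structural knowledge of the chain beyond its number of states.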


Full work available at URL: https://arxiv.org/abs/1408.0822




Recommendations



Cites Work


Cited In (4)





This page was built for publication: Surprise probabilities in Markov chains
