How linear reinforcement affects Donsker's theorem for empirical processes

From MaRDI portal
Publication:2210751
Abstract: A reinforcement algorithm introduced by H. A. Simon \cite{Simon} produces a sequence of uniform random variables with memory as follows. At each step, with a fixed probability \(p \in (0,1)\), \(\hat{U}_{n+1}\) is sampled uniformly at random from \(\hat{U}_1, \ldots, \hat{U}_n\), and with complementary probability \(1-p\), \(\hat{U}_{n+1}\) is a new independent uniform variable. The Glivenko-Cantelli theorem remains valid for the reinforced empirical measure, but the Donsker theorem does not. Specifically, we show that when \(p < 1/2\) the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor, and that when \(p > 1/2\) a further rescaling is needed, the limit then being a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and, more generally, step-reinforced random walks.
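The reinforcement scheme described in the abstract is straightforward to simulate. The following is a minimal sketch (the function name and interface are illustrative, not from the paper): with probability \(p\) the next term repeats a uniformly chosen earlier term, and otherwise it is a fresh independent Uniform(0,1) variable.

```python
import random

def reinforced_uniform_sequence(n, p, rng=None):
    """Sketch of Simon's reinforcement algorithm as described in the
    abstract: each new term either copies a uniformly chosen past term
    (probability p) or is a fresh independent Uniform(0,1) draw."""
    rng = rng or random.Random()
    seq = [rng.random()]  # the first term is always a fresh uniform
    for _ in range(1, n):
        if rng.random() < p:
            seq.append(rng.choice(seq))  # reinforce: repeat a past value
        else:
            seq.append(rng.random())     # innovate: new independent uniform
    return seq
```

The reinforced empirical measure studied in the paper is then the empirical distribution of such a sequence; one can compare its empirical CDF against the identity on [0,1] to observe the Glivenko-Cantelli behavior numerically.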
