How linear reinforcement affects Donsker's theorem for empirical processes

From MaRDI portal
Publication:2210751

DOI: 10.1007/S00440-020-01001-9
zbMATH Open: 1478.60108
arXiv: 2005.11986
OpenAlex: W3087345653
MaRDI QID: Q2210751
FDO: Q2210751

Jean Bertoin

Publication date: 8 November 2020

Published in: Probability Theory and Related Fields

Abstract: A reinforcement algorithm introduced by H. A. Simon [Simon] produces a sequence of uniform random variables with memory as follows. At each step, with a fixed probability $p\in(0,1)$, $\hat U_{n+1}$ is sampled uniformly from $\hat U_1,\ldots,\hat U_n$, and with complementary probability $1-p$, $\hat U_{n+1}$ is a new independent uniform variable. The Glivenko-Cantelli theorem remains valid for the reinforced empirical measure, but the Donsker theorem does not. Specifically, we show that for $p<1/2$ the sequence of empirical processes converges in law to a Brownian bridge only up to a constant factor, whereas for $p>1/2$ a further rescaling is needed and the limit is then a bridge with exchangeable increments and discontinuous paths. This is related to earlier limit theorems for correlated Bernoulli processes, the so-called elephant random walk, and more generally step-reinforced random walks.
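The reinforcement scheme in the abstract is simple to simulate. The following is a minimal sketch (not code from the paper; the function names `reinforced_uniforms` and `empirical_cdf` are illustrative): with probability p the next term copies a uniformly chosen past term, otherwise it is a fresh independent Uniform(0,1) draw, and the reinforced empirical distribution function can then be evaluated directly.

```python
import random

def reinforced_uniforms(n, p, seed=0):
    """Simulate n steps of Simon's linear-reinforcement algorithm:
    with probability p, repeat a uniformly chosen past value;
    with probability 1 - p, draw a new independent Uniform(0,1)."""
    rng = random.Random(seed)
    seq = [rng.random()]  # the first term is always a fresh uniform draw
    for _ in range(1, n):
        if rng.random() < p:
            seq.append(rng.choice(seq))   # reinforcement: copy a past value
        else:
            seq.append(rng.random())      # innovation: new uniform variable
    return seq

def empirical_cdf(sample, x):
    """Empirical distribution function of the sample, evaluated at x."""
    return sum(u <= x for u in sample) / len(sample)
```

By the Glivenko-Cantelli result cited above, `empirical_cdf` of a long reinforced sample still approximates the Uniform(0,1) distribution function; the paper's point is that the fluctuations around it scale differently depending on whether p is below or above 1/2.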


Full work available at URL: https://arxiv.org/abs/2005.11986







Cited In (5)






