On ergodic two-armed bandits
From MaRDI portal
DOI: 10.1214/10-AAP751 · zbMATH Open: 1275.62056 · arXiv: 0905.0463 · OpenAlex: W2136855133 · MaRDI QID: Q417067
Authors: Pierre Tarrès, P. Vandekerkhove
Publication date: 13 May 2012
Published in: The Annals of Applied Probability
Abstract: A device has two arms with unknown deterministic payoffs, and the aim is to asymptotically identify the best one without spending too much time on the other. The Narendra algorithm offers a stochastic procedure to this end. We show, under weak ergodic assumptions on these deterministic payoffs, that the procedure eventually chooses the best arm (i.e., the one with greatest Cesàro limit) with probability one, for appropriate step sequences of the algorithm. In the case of i.i.d. payoffs, this implies a "quenched" version of the "annealed" result of Lamberton, Pagès and Tarrès [Ann. Appl. Probab. 14 (2004) 1424--1454] by the law of the iterated logarithm, thus generalizing it. More precisely, if $(\eta_{\ell,i})_{i\in\mathbb{N}}$, $\ell\in\{A,B\}$, are the deterministic reward sequences we would get if we played arm $\ell$ at time $i$, we obtain infallibility with the same assumption on nonincreasing step sequences on the payoffs as in Lamberton, Pagès and Tarrès [Ann. Appl. Probab. 14 (2004) 1424--1454], replacing the i.i.d. assumption by the hypothesis that the empirical averages $n^{-1}\sum_{i=1}^{n}\eta_{A,i}$ and $n^{-1}\sum_{i=1}^{n}\eta_{B,i}$ converge, as $n$ tends to infinity, respectively, to $\theta_A$ and $\theta_B$, with rate at least $1/\log^{1+\varepsilon}n$, for some $\varepsilon>0$. We also show a fallibility result, that is, convergence with positive probability to the choice of the wrong arm, which implies the corresponding result of Lamberton, Pagès and Tarrès [Ann. Appl. Probab. 14 (2004) 1424--1454] in the i.i.d. case.
Full work available at URL: https://arxiv.org/abs/0905.0463
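The procedure described in the abstract can be sketched in a few lines. The following is an illustrative reimplementation of a Narendra-type linear reward/inaction scheme, not the authors' code: the step sequence 1/(n + 10), the initial value 0.5, and the periodic 0/1 payoff sequences (with Cesàro averages 0.7 for arm A and 0.3 for arm B) are assumptions chosen for the demo.

```python
import random

def narendra(eta_A, eta_B, n_steps, step=lambda n: 1.0 / (n + 10), x0=0.5, seed=0):
    """Run a Narendra-type (linear reward/inaction) scheme for n_steps.

    x is the probability of playing arm A; eta_A(i) and eta_B(i) are the
    deterministic {0, 1} payoffs we would receive by playing A or B at time i.
    """
    rng = random.Random(seed)
    x = x0
    for i in range(1, n_steps + 1):
        g = step(i)
        if rng.random() < x:      # play arm A
            if eta_A(i):          # rewarded: move x toward 1
                x += g * (1.0 - x)
        else:                     # play arm B
            if eta_B(i):          # rewarded: move x toward 0
                x -= g * x
    return x

# Periodic deterministic payoffs with Cesàro averages 0.7 (arm A) and 0.3 (arm B);
# for appropriate step sequences the paper's infallibility result predicts
# convergence of x to 1, i.e. eventual selection of arm A.
x_final = narendra(lambda i: i % 10 < 7, lambda i: i % 10 < 3, n_steps=50_000)
print(x_final)
```

With the reward/inaction rule, x moves only on rewarded plays, so if only arm A ever pays the trajectory is nondecreasing; the fallibility result in the paper shows that for other step sequences the scheme can instead lock onto the wrong arm with positive probability.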
Recommendations
- A two armed bandit type problem revisited
- Finite-time lower bounds for the two-armed bandit problem
- Further contributions to the two-armed bandit problem
- Randomization in the two-armed bandit problem
- On two continuum armed bandit problems in high dimensions
- Poissonian two-armed bandit: a new approach
- On the Bernoulli three-armed bandit problem
- On the problem of the two-armed bandit with impulse controls and discounting
MSC classifications
- Statistical aspects of information-theoretic topics (62B10)
- Sequential statistical design (62L05)
- Stochastic approximation (62L20)
Cites Work
- Learning Automata - A Survey
- Stochastic algorithms
- When can the two-armed bandit algorithm be trusted?
- On the linear model with two absorbing barriers
- Use of Stochastic Automata for Parameter Self-Optimization with Multimodal Performance Criteria
- A penalized bandit algorithm
- A two armed bandit type problem
- The law of the iterated logarithm for additive functionals of Markov chains
- Stochastic approximation with averaging innovation applied to finance
- A two armed bandit type problem revisited
- How Fast Is the Bandit?
Cited In
- When can the two-armed bandit algorithm be trusted?
- Further contributions to the two-armed bandit problem
- Convergence in models with bounded expected relative hazard rates
- Evaluation of asymptotic approximations for a two-stage Bernoulli bandit
- Some results on two-armed bandits when both projects vary
- How Fast Is the Bandit?