On ergodic two-armed bandits
From MaRDI portal
Publication:417067
Abstract: A device has two arms with unknown deterministic payoffs, and the aim is to asymptotically identify the best one without spending too much time on the other. The Narendra algorithm offers a stochastic procedure to this end. We show, under weak ergodic assumptions on these deterministic payoffs, that the procedure eventually chooses the best arm (i.e., the one with the greatest Cesàro limit) with probability one, for appropriate step sequences of the algorithm. In the case of i.i.d. payoffs, this implies a "quenched" version of the "annealed" result of Lamberton, Pagès and Tarrès [Ann. Appl. Probab. 14 (2004) 1424--1454] by the law of the iterated logarithm, thus generalizing it. More precisely, if \((s^A_n)_{n\ge 1}\) and \((s^B_n)_{n\ge 1}\) are the deterministic reward sequences we would get if we played at time \(n\), we obtain infallibility with the same assumption on nonincreasing step sequences as in Lamberton, Pagès and Tarrès [Ann. Appl. Probab. 14 (2004) 1424--1454], replacing the i.i.d. assumption on the payoffs by the hypothesis that the empirical averages \(n^{-1}\sum_{k=1}^{n} s^A_k\) and \(n^{-1}\sum_{k=1}^{n} s^B_k\) converge, as \(n\) tends to infinity, respectively, to \(\theta_A\) and \(\theta_B\), with rate at least \(1/(\log n)^{1+\varepsilon}\), for some \(\varepsilon > 0\). We also show a fallibility result, that is, convergence with positive probability to the choice of the wrong arm, which implies the corresponding result of Lamberton, Pagès and Tarrès [Ann. Appl. Probab. 14 (2004) 1424--1454] in the i.i.d. case.
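The Narendra scheme discussed in the abstract is a linear reward-inaction stochastic approximation: a probability \(x_n\) of playing arm A is nudged toward the arm that just paid off, with a decreasing step. The sketch below is an illustrative reconstruction under the usual assumptions (payoffs in [0, 1], step sequence \(\gamma_n\)); the function names, the toy payoff sequences, and the particular step choice are this sketch's own, not the authors'.

```python
import random

def narendra_step(x, played_a, reward, g):
    """One linear reward-inaction update: move x toward the arm just played,
    proportionally to its observed reward (assumed to lie in [0, 1])."""
    if played_a:
        return x + g * reward * (1.0 - x)   # reinforce arm A
    return x - g * reward * x               # reinforce arm B

def narendra_two_armed(rewards_a, rewards_b, gamma=lambda n: 1.0 / (n + 2),
                       x0=0.5, seed=0):
    """Simulate the scheme on deterministic reward sequences.

    x is the current probability of playing arm A; at step n we play A with
    probability x, observe the deterministic payoff of the chosen arm, and
    update with step gamma(n). A sketch, not a reference implementation.
    """
    rng = random.Random(seed)
    x = x0
    for n, (sa, sb) in enumerate(zip(rewards_a, rewards_b)):
        played_a = rng.random() < x
        x = narendra_step(x, played_a, sa if played_a else sb, gamma(n))
    return x

if __name__ == "__main__":
    # Toy ergodic payoffs with Cesàro averages 0.8 (arm A) and 0.3 (arm B).
    N = 100_000
    print(narendra_two_armed([0.8] * N, [0.3] * N))
```

In simulations the estimate tends toward the better arm, in line with the infallibility result; the abstract's fallibility result is the caveat that, for some step sequences, the algorithm converges to the wrong arm with positive probability.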
Recommendations
- scientific article; zbMATH DE number 3911530
- A two armed bandit type problem revisited
- Finite-time lower bounds for the two-armed bandit problem
- Further contributions to the two-armed bandit problem
- Randomization in the two-armed bandit problem
- On two continuum armed bandit problems in high dimensions
- Poissonian two-armed bandit: a new approach
- On the Bernoulli three-armed bandit problem
- On the problem of the two-armed bandit with impulse controls and discounting
Cites work
- scientific article; zbMATH DE number 3940388
- A penalized bandit algorithm
- A two armed bandit type problem
- A two armed bandit type problem revisited
- How Fast Is the Bandit?
- Learning Automata - A Survey
- On the linear model with two absorbing barriers
- Stochastic algorithms
- Stochastic approximation with averaging innovation applied to finance
- The law of the iterated logarithm for additive functionals of Markov chains
- Use of Stochastic Automata for Parameter Self-Optimization with Multimodal Performance Criteria
- When can the two-armed bandit algorithm be trusted?
Cited in (7)
- Convergence in models with bounded expected relative hazard rates
- Evaluation of asymptotic approximations for a two-stage Bernoulli bandit
- When can the two-armed bandit algorithm be trusted?
- scientific article; zbMATH DE number 3911530
- Some results on two-armed bandits when both projects vary
- Further contributions to the two-armed bandit problem
- How Fast Is the Bandit?
This page was built for publication: On ergodic two-armed bandits (MaRDI item Q417067)