Regret bounds for restless Markov bandits

From MaRDI portal (Publication Q465253)

DOI: 10.1016/j.tcs.2014.09.026
zbMATH Open: 1360.60090
arXiv: 1209.2693
OpenAlex: W2178643644
MaRDI QID: Q465253


Authors: Ronald Ortner, Daniil Ryabko, Peter Auer, Rémi Munos


Publication date: 31 October 2014

Published in: Theoretical Computer Science

Abstract: We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that after T steps achieves Õ(√T) regret with respect to the best policy that knows the distributions of all arms. No assumptions on the Markov chains are made except that they are irreducible. In addition, we show that index-based policies are necessarily suboptimal for the considered problem.
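The defining feature of the restless setting described in the abstract is that every arm's Markov chain keeps evolving whether or not the learner pulls that arm. A minimal sketch of this environment (not the paper's algorithm; the arms, transition matrices, rewards, and the round-robin policy below are illustrative assumptions) might look like:

```python
import random

class RestlessArm:
    """One arm whose state follows its own Markov chain at every step."""

    def __init__(self, transition, rewards, state=0):
        self.transition = transition  # transition[s][s2] = P(state s -> s2)
        self.rewards = rewards        # rewards[s] = reward if pulled in state s
        self.state = state

    def step(self):
        # The chain moves regardless of the learner's action (restlessness).
        p, cum = random.random(), 0.0
        for s, prob in enumerate(self.transition[self.state]):
            cum += prob
            if p < cum:
                self.state = s
                break

    def pull(self):
        return self.rewards[self.state]

def run(arms, policy, horizon, seed=0):
    random.seed(seed)
    total = 0.0
    for t in range(horizon):
        total += arms[policy(t)].pull()  # only the chosen arm's reward is seen
        for arm in arms:                 # but all chains evolve, pulled or not
            arm.step()
    return total

# Two irreducible 2-state chains (irreducibility is the paper's only
# assumption on the chains); parameters are made up for illustration.
arms = [
    RestlessArm([[0.9, 0.1], [0.1, 0.9]], rewards=[0.0, 1.0]),
    RestlessArm([[0.5, 0.5], [0.5, 0.5]], rewards=[0.2, 0.6]),
]
total = run(arms, policy=lambda t: t % 2, horizon=1000)  # naive round-robin
print(total / 1000)  # average reward of the naive policy
```

The regret of the paper is measured against the best policy that knows all transition and reward distributions; the round-robin policy here is just a placeholder learner used to exercise the environment.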


Full work available at URL: https://arxiv.org/abs/1209.2693








Cites Work


Cited In (12)





