The multi-armed bandit, with constraints

From MaRDI portal
Publication:378726

DOI: 10.1007/s10479-012-1250-y
zbMATH Open: 1274.90470
arXiv: 1203.4640
OpenAlex: W1965106111
MaRDI QID: Q378726
FDO: Q378726


Authors: Eric V. Denardo, Eugene A. Feinberg, Uriel G. Rothblum


Publication date: 12 November 2013

Published in: Annals of Operations Research

Abstract: The early sections of this paper present an analysis of a Markov decision model that is known as the multi-armed bandit under the assumption that the utility function of the decision maker is either linear or exponential. The analysis includes efficient procedures for computing the expected utility associated with the use of a priority policy and for identifying a priority policy that is optimal. The methodology in these sections is novel, building on the use of elementary row operations. In the later sections of this paper, the analysis is adapted to accommodate constraints that link the bandits.
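The abstract notes that expected utilities of priority policies can be computed efficiently via elementary row operations. As a loose illustration of that idea (not the paper's actual algorithm, and with hypothetical numbers throughout), the expected discounted reward v of repeatedly playing a single arm modeled as a Markov reward chain satisfies the linear system (I - βP)v = r, which elementary row operations (Gaussian elimination) solve directly:

```python
def gaussian_solve(A, b):
    """Solve A x = b by elementary row operations with partial pivoting."""
    n = len(A)
    # Build the augmented matrix [A | b].
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        # Row swap: bring the largest entry in this column to the pivot position.
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        # Row replacement: eliminate entries below the pivot.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    # Back-substitution on the resulting upper-triangular system.
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Toy two-state arm (hypothetical data): transition matrix P,
# one-step rewards r, discount factor beta.
P = [[0.5, 0.5], [0.2, 0.8]]
r = [1.0, 0.0]
beta = 0.9
A = [[(1.0 if i == j else 0.0) - beta * P[i][j] for j in range(2)]
     for i in range(2)]
v = gaussian_solve(A, r)
print(v)  # expected discounted reward from each starting state
```

Here v solves the evaluation equation exactly; the paper's contribution, per the abstract, is organizing such row operations into efficient procedures for full priority policies over several arms, with linear or exponential utilities and linking constraints.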


Full work available at URL: https://arxiv.org/abs/1203.4640










Cited In (14)





