Simple regret optimization in online planning for Markov decision processes

From MaRDI portal
Publication:2921080

DOI: 10.1613/JAIR.4432
zbMATH Open: 1366.90216
arXiv: 1206.3382
OpenAlex: W2157136665
MaRDI QID: Q2921080
FDO: Q2921080


Authors: Zohar Feldman, Carmel Domshlak


Publication date: 30 September 2014

Published in: The Journal of Artificial Intelligence Research (JAIR)

Abstract: We consider online planning in Markov decision processes (MDPs). In online planning, the agent focuses only on its current state, deliberates about the set of possible policies from that state onwards and, when interrupted, uses the outcome of that exploratory deliberation to choose which action to perform next. The performance of algorithms for online planning is assessed in terms of simple regret, the agent's expected performance loss when the chosen action, rather than an optimal one, is followed. To date, state-of-the-art algorithms for online planning in general MDPs are either best-effort, or guarantee only a polynomial-rate reduction of simple regret over time. Here we introduce a new Monte-Carlo tree search algorithm, BRUE, that guarantees an exponential-rate reduction of simple regret and error probability. This algorithm is based on a simple yet non-standard state-space sampling scheme, MCTS2e, in which different parts of each sample are dedicated to different exploratory objectives. Our empirical evaluation shows that BRUE not only provides superior performance guarantees, but is also very effective in practice and compares favorably to the state of the art. We then extend BRUE with a variant of "learning by forgetting." The resulting set of algorithms, BRUE(alpha), generalizes BRUE, improves the exponential factor in the upper bound on its reduction rate, and exhibits even more attractive empirical performance.
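The core idea of the MCTS2e scheme described above, separating each sample into an exploration part and an estimation part, can be illustrated with a hedged sketch. The toy single-state MDP, the uniform exploration policy, and the update rule below are illustrative assumptions, not the paper's actual BRUE algorithm: each rollout draws a random switch depth, explores uniformly along the whole trajectory, and then uses only the tail return past the switch point to update the value estimate at the switch node.

```python
import random
from collections import defaultdict

# Hypothetical toy single-state MDP (for illustration only): two actions,
# where action 1 has mean reward 0.8 versus 0.5 for action 0.
ACTIONS = (0, 1)

def step(state, action):
    reward = random.random() + (0.3 if action == 1 else 0.0)
    return state, reward  # single state, so the state never changes

def mcts2e_sample(root, horizon, counts, values):
    """One sample in the spirit of MCTS2e: a random switch depth splits
    the sample into an exploration part (uniform action choice) and an
    estimation part (only the tail return past the switch point updates
    the estimate at the switch node)."""
    switch = random.randrange(horizon)
    state, trace, rewards = root, [], []
    for _ in range(horizon):
        action = random.choice(ACTIONS)        # exploration: uniform policy
        trace.append((state, action))
        state, r = step(state, action)
        rewards.append(r)
    s, a = trace[switch]
    tail_return = sum(rewards[switch:])        # estimation: tail of the sample
    counts[(s, a)] += 1
    values[(s, a)] += (tail_return - values[(s, a)]) / counts[(s, a)]

random.seed(0)
counts, values = defaultdict(int), defaultdict(float)
for _ in range(5000):
    mcts2e_sample(0, 3, counts, values)
best = max(ACTIONS, key=lambda a: values[(0, a)])  # recommended action
```

With enough samples the recommended action converges to the higher-reward one; the point of the separation is that the action-selection statistics are never contaminated by the exploratory part of each trajectory.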


Full work available at URL: https://arxiv.org/abs/1206.3382


Cited In (3)





