Faster Rates for Policy Learning

MaRDI portal publication

MaRDI QID: Q6285779
arXiv: 1704.06431


Authors: Alexander R. Luedtke, Antoine Chambaz


Publication date: 21 April 2017

Abstract: This article improves the existing proven rates of regret decay in optimal policy estimation. We give a margin-free result showing that the regret decay for estimating a within-class optimal policy is second-order for empirical risk minimizers over Donsker classes, with regret decaying at a faster rate than the standard error of an efficient estimator of the value of an optimal policy. We also give a result from the classification literature that shows that faster regret decay is possible via plug-in estimation provided a margin condition holds. Four examples are considered. In these examples, the regret is expressed in terms of either the mean value or the median value; the number of possible actions is either two or finitely many; and the sampling scheme is either independent and identically distributed or sequential, where the latter represents a contextual bandit sampling scheme.
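One of the settings the abstract describes is estimating a within-class optimal policy by empirical risk minimization over a policy class, with two possible actions and i.i.d. sampling. The following is a minimal illustrative sketch of that setup, not the authors' method: it simulates i.i.d. data with a known randomization probability, estimates the value of each policy in a small threshold-indexed class via inverse-probability weighting, and picks the empirical maximizer. All names (`value_ipw`, the data-generating process, the threshold class) are hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated i.i.d. data: context W, binary action A assigned at random with
# known probability 0.5, and outcome Y whose mean depends on (W, A).
n = 5000
W = rng.normal(size=n)
A = rng.integers(0, 2, size=n)
Y = (2 * A - 1) * W + rng.normal(size=n)  # action 1 is better when W > 0

def value_ipw(policy, W, A, Y, propensity=0.5):
    """Inverse-probability-weighted estimate of the mean outcome ("value")
    obtained by following `policy`, a map from contexts to actions {0, 1}."""
    d = policy(W)
    return np.mean((A == d) * Y / propensity)

# A small policy class indexed by a threshold t: take action 1 iff W > t.
thresholds = np.linspace(-2, 2, 81)
values = [value_ipw(lambda w, t=t: (w > t).astype(int), W, A, Y)
          for t in thresholds]

# Empirical risk minimizer over the class: the threshold whose estimated
# value is largest (the true optimum here is t = 0).
t_hat = thresholds[int(np.argmax(values))]
print(f"estimated optimal threshold: {t_hat:.2f}")
```

The regret of the learned policy is the gap between the value of the within-class optimal policy and the value of the policy at `t_hat`; the paper's results concern how fast that gap shrinks with `n`.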
