A continuous-time approach to online optimization

From MaRDI portal
Abstract: We consider a family of learning strategies for online optimization problems that evolve in continuous time, and we show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms, including as special cases the exponential weight algorithm, online mirror descent, smooth fictitious play and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds, and we show that they can be decomposed into a term stemming from continuous-time considerations and a term which measures the disparity between discrete and continuous time. As a result, we obtain a general class of infinite-horizon learning strategies that guarantee an $\mathcal{O}(n^{1/2})$ regret bound without having to resort to a doubling trick.
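One of the discrete-time special cases mentioned in the abstract, the exponential weight algorithm, can be sketched as follows. This is a generic textbook implementation, not code from the paper: it plays a distribution proportional to $\exp(-\eta \cdot \text{cumulative loss})$ over $d$ experts with losses in $[0,1]$, and with the horizon-tuned step size $\eta = \sqrt{8 \ln d / n}$ its regret is bounded by $\sqrt{(n \ln d)/2}$, i.e. $\mathcal{O}(n^{1/2})$; the loss sequence below is an illustrative example, not data from the paper.

```python
import math

def exp_weights(loss_seq, d, eta):
    """Exponential weights (Hedge) over d experts.

    Plays the distribution p_i proportional to exp(-eta * L_i), where L_i is
    expert i's cumulative loss so far; returns the algorithm's total expected
    loss and the best expert's total loss.
    """
    cum = [0.0] * d          # cumulative loss of each expert
    total_loss = 0.0         # algorithm's cumulative expected loss
    for losses in loss_seq:
        m = min(cum)         # subtract min for numerical stability
        w = [math.exp(-eta * (c - m)) for c in cum]
        s = sum(w)
        p = [x / s for x in w]
        total_loss += sum(pi * li for pi, li in zip(p, losses))
        cum = [c + l for c, l in zip(cum, losses)]
    return total_loss, min(cum)

# Illustrative alternating loss sequence on d = 2 experts.
n, d = 10_000, 2
eta = math.sqrt(8 * math.log(d) / n)   # horizon-tuned step size
seq = [([1.0, 0.0] if t % 2 == 0 else [0.0, 1.0]) for t in range(n)]
alg_loss, best_loss = exp_weights(seq, d, eta)
regret = alg_loss - best_loss
bound = math.sqrt(n * math.log(d) / 2)  # classical Hedge regret bound
```

On this sequence the realized regret stays well below the $\sqrt{(n \ln d)/2}$ bound, matching the $\mathcal{O}(n^{1/2})$ rate the paper recovers without a doubling trick.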
