A continuous-time approach to online optimization

From MaRDI portal
Publication: Q520967

DOI: 10.3934/JDG.2017008 · zbMATH Open: 1359.90095 · arXiv: 1401.6956 · OpenAlex: W2964294891 · Wikidata: Q60142047 · Scholia: Q60142047 · MaRDI QID: Q520967


Authors: Joon Kwon, Panayotis Mertikopoulos


Publication date: 6 April 2017

Published in: Journal of Dynamics and Games

Abstract: We consider a family of learning strategies for online optimization problems that evolve in continuous time and we show that they lead to no regret. From a more traditional, discrete-time viewpoint, this continuous-time approach allows us to derive the no-regret properties of a large class of discrete-time algorithms including as special cases the exponential weight algorithm, online mirror descent, smooth fictitious play and vanishingly smooth fictitious play. In so doing, we obtain a unified view of many classical regret bounds, and we show that they can be decomposed into a term stemming from continuous-time considerations and a term which measures the disparity between discrete and continuous time. As a result, we obtain a general class of infinite horizon learning strategies that guarantee an $\mathcal{O}(n^{1/2})$ regret bound without having to resort to a doubling trick.
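As a concrete illustration of one of the special cases mentioned in the abstract, the following is a minimal discrete-time sketch of the exponential weight algorithm. It is not taken from the paper; the function name, the loss model (bounded losses in $[0,1]$), and the choice of learning rate are illustrative assumptions. With $\eta = \sqrt{8 \ln K / n}$ the classical analysis gives a regret of at most $\sqrt{(n/2)\ln K}$, matching the $\mathcal{O}(n^{1/2})$ rate discussed above.

```python
import numpy as np

def exponential_weights(loss_matrix, eta):
    """Sketch of the exponential weight algorithm (illustrative, not the paper's code).

    loss_matrix: (n_rounds, n_actions) array of losses in [0, 1].
    eta: learning rate; eta = sqrt(8 ln K / n) yields O(sqrt(n)) regret.
    Returns (algorithm's expected cumulative loss, best fixed action's loss).
    """
    n, k = loss_matrix.shape
    cum_loss = np.zeros(k)   # cumulative loss of each action so far
    total = 0.0              # algorithm's expected cumulative loss
    for t in range(n):
        w = np.exp(-eta * cum_loss)   # exponential weighting of past losses
        p = w / w.sum()               # mixed strategy played at round t
        total += p @ loss_matrix[t]   # expected loss incurred this round
        cum_loss += loss_matrix[t]    # observe the full loss vector
    return total, cum_loss.min()

# Usage: random bounded losses over n rounds and k actions (hypothetical data).
rng = np.random.default_rng(0)
n, k = 1000, 5
losses = rng.random((n, k))
eta = np.sqrt(8 * np.log(k) / n)
alg_loss, best_loss = exponential_weights(losses, eta)
regret = alg_loss - best_loss  # guaranteed <= sqrt((n/2) * ln k)
```

The doubling trick the abstract alludes to is the standard workaround when the horizon $n$ (and hence $\eta$) is unknown in advance; the paper's contribution is a class of strategies that avoid it.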


Full work available at URL: https://arxiv.org/abs/1401.6956










Cited In (18)





This page was built for publication: A continuous-time approach to online optimization
