Convergence of the Iterates in Mirror Descent Methods
Publication: 6301140
arXiv: 1805.01526
MaRDI QID: Q6301140
FDO: Q6301140
Authors: Thinh T. Doan, Subhonmesh Bose, K. Nguyen, Carolyn L. Beck
Publication date: 3 May 2018
Abstract: We consider centralized and distributed mirror descent algorithms over a finite-dimensional Hilbert space, and prove that the problem variables converge to an optimizer of a possibly nonsmooth function when the step sizes are square summable but not summable. Prior literature has focused on the convergence of the function value to its optimum. However, applications from distributed optimization and learning in games require the convergence of the variables to an optimizer, which is generally not guaranteed without assuming strong convexity of the objective function. We provide numerical simulations comparing entropic mirror descent and standard subgradient methods for the robust regression problem.
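The abstract pairs entropic mirror descent with step sizes that are square summable but not summable, applied to robust regression. The following sketch illustrates that setting (it is not the paper's experiment): the exponentiated-gradient update on the probability simplex for the nonsmooth objective f(x) = ||Ax - b||_1, with steps eta_k = c/(k+1). The problem sizes, the constant c, and the iteration count are arbitrary choices for the demo.

```python
import numpy as np

# Hedged sketch, assuming a robust-regression instance constrained to the
# probability simplex: minimize f(x) = ||A x - b||_1 over the simplex using
# entropic mirror descent (the exponentiated-gradient update). The step sizes
# eta_k = 0.1/(k+1) are square summable but not summable, matching the
# paper's step-size assumption; all data below are synthetic and illustrative.

rng = np.random.default_rng(0)
n, d = 10, 3
x_true = rng.dirichlet(np.ones(d))      # a point on the simplex
A = rng.standard_normal((n, d))
b = A @ x_true                          # noiseless data, so min f = 0

x = np.full(d, 1.0 / d)                 # start at the uniform distribution
res0 = np.linalg.norm(A @ x - b, 1)
for k in range(20000):
    g = A.T @ np.sign(A @ x - b)        # a subgradient of f at x
    eta = 0.1 / (k + 1)                 # square summable, not summable
    w = x * np.exp(-eta * g)            # multiplicative (entropic) update
    x = w / w.sum()                     # renormalize onto the simplex
res = np.linalg.norm(A @ x - b, 1)
print(f"l1 residual: {res0:.4f} -> {res:.4f}")
```

Note that the printed quantity tracks the function value; the paper's contribution is the stronger statement that the iterate x itself converges to an optimizer under this step-size schedule, without assuming strong convexity.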