Mirror Descent for Constrained Optimization Problems with Large Subgradient Values
From MaRDI portal
Publication:6323006
DOI: 10.20537/2076-7633-2020-12-2-301-317 · arXiv: 1908.00218 · MaRDI QID: Q6323006
Authors: F. S. Stonyakin, A. N. Stepanov, A. V. Gasnikov, Alexander A. Titov
Publication date: 1 August 2019
Abstract: Based on the ideas of arXiv:1710.06612, we consider the problem of minimizing a Hölder-continuous non-smooth functional subject to a non-positivity constraint on a convex (generally, non-smooth) Lipschitz-continuous functional. We propose novel step-size strategies and adaptive stopping rules for Mirror Descent algorithms on this class of problems. The methods are shown to be applicable to objective functionals of various levels of smoothness. By applying the restart technique to the Mirror Descent algorithm, we obtain an optimal method for optimization problems with strongly convex objective functionals. Convergence-rate estimates for the considered algorithms are derived, depending on the level of smoothness of the objective functional; these estimates show that the methods are optimal from the point of view of the theory of lower oracle bounds. In addition, the case of a quasi-convex objective functional and constraint is considered.
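The constrained Mirror Descent framework this line of work builds on (going back to arXiv:1710.06612) alternates "productive" steps along a subgradient of the objective, taken when the constraint is nearly satisfied, with "non-productive" steps along a subgradient of the constraint otherwise, and outputs an average over the productive iterates. The following is a minimal illustrative sketch under simplifying assumptions (Euclidean prox function, so Mirror Descent reduces to subgradient descent, and the simple step size eps/||v||^2 for Lipschitz functionals); function names and the toy problem are our own, not taken from the paper.

```python
import numpy as np

def mirror_descent_constrained(f, g, subgrad_f, subgrad_g, x0, eps, max_iter=10000):
    """Sketch of adaptive Mirror Descent for min f(x) s.t. g(x) <= 0.

    Assumes a Euclidean prox setup, so the mirror step is plain subgradient
    descent. A step is 'productive' (uses a subgradient of f) when the current
    point nearly satisfies the constraint, g(x) <= eps; otherwise it is
    'non-productive' and pushes toward feasibility along a subgradient of g.
    """
    x = np.asarray(x0, dtype=float)
    productive_points = []
    for _ in range(max_iter):
        if g(x) <= eps:
            v = subgrad_f(x)           # productive step
            productive_points.append(x.copy())
        else:
            v = subgrad_g(x)           # non-productive step
        n2 = float(v @ v)
        if n2 == 0.0:                   # zero subgradient: stationary point
            break
        x = x - (eps / n2) * v          # step size eps / ||v||^2
    # Output: average over productive iterates, as is standard for this scheme.
    return np.mean(productive_points, axis=0)

# Toy problem: minimize |x1| + |x2 - 2| subject to x1 + x2 - 1 <= 0.
# The constrained minimizer is (0, 1).
res = mirror_descent_constrained(
    f=lambda x: abs(x[0]) + abs(x[1] - 2.0),
    g=lambda x: x[0] + x[1] - 1.0,
    subgrad_f=lambda x: np.array([np.sign(x[0]), np.sign(x[1] - 2.0)]),
    subgrad_g=lambda x: np.array([1.0, 1.0]),
    x0=[0.0, 0.0], eps=0.01)
# res is approximately (0, 1)
```

The restart variant for strongly convex objectives mentioned in the abstract would wrap such a routine in an outer loop that repeatedly halves the accuracy parameter and restarts from the previous output; it is omitted here for brevity.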