Accelerated Bregman proximal gradient methods for relatively smooth convex optimization

From MaRDI portal
Publication: Q2044481

DOI: 10.1007/S10589-021-00273-8
zbMATH Open: 1473.90114
arXiv: 1808.03045
OpenAlex: W3140735157
MaRDI QID: Q2044481
FDO: Q2044481


Authors: Filip Hanzely, Peter Richtárik, Lin Xiao


Publication date: 9 August 2021

Published in: Computational Optimization and Applications

Abstract: We consider the problem of minimizing the sum of two convex functions: one is differentiable and relatively smooth with respect to a reference convex function, and the other can be nondifferentiable but simple to optimize. We investigate a triangle scaling property of the Bregman distance generated by the reference convex function and present accelerated Bregman proximal gradient (ABPG) methods that attain an $O(k^{-\gamma})$ convergence rate, where $\gamma \in (0,2]$ is the triangle scaling exponent (TSE) of the Bregman distance. For the Euclidean distance, we have $\gamma = 2$ and recover the convergence rate of Nesterov's accelerated gradient methods. For non-Euclidean Bregman distances, the TSE can be much smaller (say $\gamma \leq 1$), but we show that a relaxed definition of intrinsic TSE is always equal to 2. We exploit the intrinsic TSE to develop adaptive ABPG methods that converge much faster in practice. Although theoretical guarantees on a fast convergence rate seem to be out of reach in general, our methods obtain empirical $O(k^{-2})$ rates in numerical experiments on several applications and provide posterior numerical certificates for the fast rates.
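
For readers of the abstract, the following sketch records the standard definitions behind the terminology, using the usual notation of the relative-smoothness literature rather than quoting the paper itself; the exact constants, domains, and the scaling gain $G$ below are assumptions about the general form and may be stated slightly differently in the paper. The Bregman distance generated by the reference function $h$, the relative smoothness of $f$ with respect to $h$, and a triangle scaling inequality with exponent $\gamma$ read as
\[
D_h(x, y) = h(x) - h(y) - \langle \nabla h(y),\, x - y \rangle,
\]
\[
f(x) \le f(y) + \langle \nabla f(y),\, x - y \rangle + L\, D_h(x, y),
\]
\[
D_h\bigl((1-\theta)\bar{x} + \theta z,\ (1-\theta)\bar{x} + \theta \tilde{z}\bigr) \le G\, \theta^{\gamma}\, D_h(z, \tilde{z}), \qquad \theta \in [0,1],
\]
where $L$ is the relative smoothness constant, $G$ is a scaling gain, and $\gamma$ is the triangle scaling exponent that appears in the $O(k^{-\gamma})$ rate.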


Full work available at URL: https://arxiv.org/abs/1808.03045
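
As a complement to the abstract, here is a minimal sketch in Python (assuming NumPy) of the basic, non-accelerated Bregman proximal gradient step with the Shannon-entropy reference function on the probability simplex. It is not the paper's ABPG algorithm, only the building block such methods accelerate; the function name bregman_prox_grad, the toy least-squares problem, and all step details are illustrative assumptions.

import numpy as np

def bregman_prox_grad(grad_f, x0, L, n_iters=500):
    """Plain (non-accelerated) Bregman proximal gradient steps with the
    Shannon-entropy reference function h(x) = sum_j x_j log x_j on the
    probability simplex. Each step solves
        argmin_u <grad_f(x), u> + L * D_h(u, x)  over the simplex,
    whose closed form is the exponentiated-gradient (entropic mirror) update."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iters):
        g = grad_f(x)
        w = x * np.exp(-(g - g.min()) / L)   # shift g for numerical stability
        x = w / w.sum()                      # renormalize onto the simplex
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 10))
    b = A @ rng.dirichlet(np.ones(10))          # consistent toy target
    grad_f = lambda x: A.T @ (A @ x - b)        # f(x) = 0.5 * ||Ax - b||^2
    # f is Euclidean-smooth with constant ||A||_2^2; by Pinsker's inequality,
    # D_h(x, y) >= 0.5 * ||x - y||_1^2 on the simplex, so the same constant
    # serves as a relative-smoothness constant with respect to the entropy.
    L = np.linalg.norm(A, 2) ** 2
    x0 = np.full(10, 0.1)                       # uniform starting point
    x = bregman_prox_grad(grad_f, x0, L)
    print("residual:", np.linalg.norm(A @ x - b))

The closed-form update x ∝ x ⊙ exp(−∇f(x)/L) comes from solving the Bregman subproblem over the simplex; the ABPG methods described in the abstract add extrapolation on top of steps of this kind and adapt the triangle scaling exponent.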






