Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization

From MaRDI portal
Publication:5037570

DOI: 10.1137/19M1298007 · zbMATH Open: 1486.90147 · arXiv: 1904.03537 · MaRDI QID: Q5037570


Authors: Mahesh Chandra Mukkamala, Peter Ochs, Thomas Pock, Shoham Sabach


Publication date: 1 March 2022

Published in: SIAM Journal on Mathematics of Data Science

Abstract: Backtracking line search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation becomes much more difficult and usually leads to very restrictive rules on the extrapolation parameter. In this paper, we show that the extrapolation parameter can be controlled by also locally finding a simple concave lower bound of the objective function. This gives rise to a double convex-concave backtracking procedure, which allows for an adaptive choice of both the step size and the extrapolation parameter. We apply this procedure to the class of inertial Bregman proximal gradient methods and prove that any sequence generated by these algorithms converges globally to a critical point of the function at hand. Numerical experiments on a number of challenging nonconvex problems in image processing and machine learning demonstrate the power of combining inertial steps with the double backtracking strategy in achieving improved performance.


Full work available at URL: https://arxiv.org/abs/1904.03537
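The double backtracking idea summarized in the abstract can be sketched in code. The following is a minimal illustrative sketch in the Euclidean case (squared-norm Bregman kernel, nonsmooth part g = 0, so the proximal step reduces to a gradient step); all parameter names are assumptions, and the coupling between the two bounds is simplified relative to the paper's actual CoCaIn BPG scheme:

```python
import numpy as np

def inertial_double_backtracking(f, grad_f, x0, L_up=1.0, L_lo=1.0,
                                 gamma0=0.9, max_iter=200, tol=1e-8):
    """Illustrative sketch of convex-concave (double) backtracking:
    a concave lower bound of f controls the extrapolation parameter,
    a convex upper bound of f controls the step size."""
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(max_iter):
        # Concave backtracking: shrink the extrapolation parameter gamma
        # until the lower-bound inequality
        #   f(x) >= f(y) + <grad f(y), x - y> - (L_lo / 2) ||x - y||^2
        # holds at the extrapolated point y = x + gamma * (x - x_prev).
        gamma = gamma0
        while True:
            y = x + gamma * (x - x_prev)
            d = x - y
            if (f(x) >= f(y) + grad_f(y) @ d - 0.5 * L_lo * (d @ d)
                    or gamma < 1e-12):
                break
            gamma *= 0.5
        # Convex backtracking: increase L_up (shrinking the step 1 / L_up)
        # until the descent-lemma upper bound
        #   f(x+) <= f(y) + <grad f(y), x+ - y> + (L_up / 2) ||x+ - y||^2
        # holds at the candidate point x+.
        while True:
            x_next = y - grad_f(y) / L_up
            d = x_next - y
            if f(x_next) <= f(y) + grad_f(y) @ d + 0.5 * L_up * (d @ d):
                break
            L_up *= 2.0
        x_prev, x = x, x_next
        if np.linalg.norm(grad_f(x)) < tol:  # stop near a critical point
            break
    return x
```

For example, on the quadratic f(x) = ||x||^2 / 2 the sketch drives the iterate to the minimizer at the origin:

```python
x = inertial_double_backtracking(lambda v: 0.5 * (v @ v), lambda v: v,
                                 np.array([3.0, -4.0]))
```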




Cited In (23)


