Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization
From MaRDI portal
Publication:5037570
Abstract: Backtracking line-search is an old yet powerful strategy for finding better step sizes to be used in proximal gradient algorithms. The main principle is to locally find a simple convex upper bound of the objective function, which in turn controls the step size that is used. In the case of inertial proximal gradient algorithms, the situation becomes much more difficult and usually leads to very restrictive rules on the extrapolation parameter. In this paper, we show that the extrapolation parameter can be controlled by also locally finding a simple concave lower bound of the objective function. This gives rise to a double convex-concave backtracking procedure which allows for an adaptive choice of both the step size and the extrapolation parameter. We apply this procedure to the class of inertial Bregman proximal gradient methods, and prove that any sequence generated by these algorithms converges globally to a critical point of the function at hand. Numerical experiments on a number of challenging non-convex problems in image processing and machine learning were conducted and show the power of combining inertial steps with the double backtracking strategy in achieving improved performance.
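The double backtracking idea described in the abstract can be illustrated with a loose sketch, specialized to the Euclidean kernel (Bregman distance \(\tfrac12\|\cdot\|^2\)): one inner loop adjusts a local *lower* curvature estimate that restricts the extrapolation parameter, and a second adjusts a local *upper* curvature estimate that sets the step size. All names below are illustrative, and the coupling rule between the extrapolation parameter and the two curvature estimates is a simplification for exposition, not the paper's exact condition.

```python
import numpy as np

def double_backtracking_ipg(grad_g, g, prox_h, x0, L0=1.0, max_iter=200, tol=1e-8):
    """Sketch of an inertial proximal gradient step with double
    (convex-concave) backtracking, Euclidean kernel only.

    grad_g, g : gradient and value of the smooth part
    prox_h    : prox_h(v, t) = argmin_x h(x) + (1/2t)||x - v||^2
    """
    x_prev, x = x0.copy(), x0.copy()
    L_up, L_lo = L0, L0  # local upper / lower curvature estimates
    for _ in range(max_iter):
        # Concave backtracking: the lower bound
        #   g(y) >= g(x) + <grad_g(x), y - x> - (L_lo/2)||y - x||^2
        # must hold at the extrapolated point; otherwise tighten L_lo,
        # which in turn shrinks the extrapolation parameter beta.
        while True:
            beta = min(0.9, 0.5 * np.sqrt(L_up / L_lo))  # heuristic coupling
            y = x + beta * (x - x_prev)
            lin = g(x) + grad_g(x) @ (y - x)
            if g(y) >= lin - 0.5 * L_lo * np.linalg.norm(y - x) ** 2:
                break
            L_lo *= 2.0
        # Convex backtracking: the classical descent-lemma upper bound
        # controls the step size 1/L_up.
        while True:
            x_new = prox_h(y - grad_g(y) / L_up, 1.0 / L_up)
            lin = g(y) + grad_g(y) @ (x_new - y)
            if g(x_new) <= lin + 0.5 * L_up * np.linalg.norm(x_new - y) ** 2:
                break
            L_up *= 2.0
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x_prev, x = x, x_new
    return x
```

On a simple composite problem such as \(\min_x \tfrac12\|x-b\|^2 + \lambda\|x\|_1\), whose prox is the soft-thresholding operator, the sketch recovers the closed-form minimizer; the paper itself treats the more general Bregman setting with non-Euclidean kernels.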
Recommendations
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- A gradient-type algorithm with backward inertial steps associated to a nonconvex minimization problem
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems
- iPiano: inertial proximal algorithm for nonconvex optimization
Cites work
- scientific article; zbMATH DE number 3850830
- scientific article; zbMATH DE number 1046019
- scientific article; zbMATH DE number 3296905
- scientific article; zbMATH DE number 3371284
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A descent lemma beyond Lipschitz gradient continuity: first-order methods revisited and applications
- A simplified view of first order methods for optimization
- An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions
- An iterative row-action method for interval convex programming
- Analysis of the Recovery of Edges in Images and Signals by Minimizing Nonconvex Regularized Least-Squares
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- Clarke Subgradients of Stratifiable Functions
- Convergence Analysis of a Proximal-Like Minimization Algorithm Using Bregman Functions
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Enhancing sparsity by reweighted \(\ell _{1}\) minimization
- Entropic Proximal Mappings with Applications to Nonlinear Programming
- First order methods beyond convexity and Lipschitz gradient continuity with applications to quadratic inverse problems
- Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems
- Interior Gradient and Proximal Methods for Convex and Conic Optimization
- Linear convergence of proximal gradient algorithm with extrapolation for a class of nonconvex nonsmooth minimization problems
- Local convergence of the heavy-ball method and iPiano for non-convex optimization
- Non-smooth non-convex Bregman minimization: unification and new algorithms
- Nonlinear Proximal Point Algorithms Using Bregman Functions, with Applications to Convex Programming
- On gradients of functions definable in o-minimal structures
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Phase retrieval via Wirtinger flow: theory and algorithms
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Proximal minimization algorithm with \(D\)-functions
- Proximité et dualité dans un espace hilbertien
- Solving (most) of a set of quadratic equalities: composite optimization for robust phase retrieval
- Solving Systems of Random Quadratic Equations via Truncated Amplitude Flow
- Some methods of speeding up the convergence of iteration methods
- The elements of statistical learning. Data mining, inference, and prediction
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Unifying abstract inexact convergence theorems and block coordinate variable metric iPiano
- Variational Analysis
- Variational phase retrieval with globally convergent preconditioned proximal algorithm
- iPiano: inertial proximal algorithm for nonconvex optimization
Cited in (23)
- Bregman proximal gradient algorithms for deep matrix factorization
- Block Bregman majorization minimization with extrapolation
- Multi-block Bregman proximal alternating linearized minimization and its application to orthogonal nonnegative matrix factorization
- The backtrack Hölder gradient method with application to min-max and min-min problems
- New Bregman proximal type algorithms for solving DC optimization problems
- A class of modified accelerated proximal gradient methods for nonsmooth and nonconvex minimization problems
- A stochastic two-step inertial Bregman proximal alternating linearized minimization algorithm for nonconvex and nonsmooth problems
- A refined inertial DC algorithm for DC programming
- A nonmonotone accelerated proximal gradient method with variable stepsize strategy for nonsmooth and nonconvex minimization problems
- Inertial proximal gradient methods with Bregman regularization for a class of nonconvex optimization problems
- Provable Phase Retrieval with Mirror Descent
- Backtracking strategies for accelerated descent methods with smooth composite objectives
- Optimal complexity and certification of Bregman first-order methods
- Global convergence of model function based Bregman proximal minimization algorithms
- A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
- Bregman proximal point type algorithms for quasiconvex minimization
- Convergence Analysis for Bregman Iterations in Minimizing a Class of Landau Free Energy Functionals
- A unified Bregman alternating minimization algorithm for generalized DC programs with application to imaging
- An alternating structure-adapted Bregman proximal gradient descent algorithm for constrained nonconvex nonsmooth optimization problems and its inertial variant
- A Bregman stochastic method for nonconvex nonsmooth problem beyond global Lipschitz gradient continuity
- Bregman proximal mappings and Bregman-Moreau envelopes under relative prox-regularity
- Stochastic composition optimization of functions without Lipschitz continuous gradient
- Learning consistent discretizations of the total variation