Local convergence of the heavy-ball method and iPiano for non-convex optimization
Abstract: A local convergence result for abstract descent methods is proved. The sequence of iterates is attracted by a local (or global) minimum, stays in its neighborhood, and converges within this neighborhood. This result allows algorithms to exploit local properties of the objective function. In particular, the abstract theory in this paper applies to the inertial forward-backward splitting method iPiano, a generalization of the heavy-ball method. Moreover, it reveals an equivalence between iPiano and inertial averaged/alternating proximal minimization and projection methods. Key for this equivalence are the attraction to a local minimum within a neighborhood and the fact that, for a prox-regular function, the gradient of the Moreau envelope is locally Lipschitz continuous and expressible in terms of the proximal mapping. In a numerical feasibility problem, the inertial alternating projection method significantly outperforms its non-inertial variants.
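The core iteration discussed here is easy to state: iPiano minimizes a sum \(f + g\) of a smooth (possibly non-convex) function \(f\) and a prox-friendly function \(g\) via \(x^{k+1} = \operatorname{prox}_{\alpha g}\big(x^k - \alpha \nabla f(x^k) + \beta (x^k - x^{k-1})\big)\). Below is a minimal NumPy sketch, not the authors' implementation; the example problem, the factor 1.9, and all names are illustrative, and the step-size bound \(\alpha < 2(1-\beta)/L\) is the rule known from the iPiano literature for convex \(g\).

```python
import numpy as np

def ipiano(grad_f, prox_g, x0, alpha, beta, iters=500):
    """Sketch of the iPiano iteration for minimizing f(x) + g(x):
        x_{k+1} = prox_{alpha*g}(x_k - alpha*grad_f(x_k) + beta*(x_k - x_{k-1})).
    g == 0 (prox = identity) gives Polyak's heavy-ball method;
    beta == 0 gives plain forward-backward splitting."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        y = x - alpha * grad_f(x) + beta * (x - x_prev)   # gradient + inertial step
        x_prev, x = x, prox_g(y, alpha)                   # proximal (backward) step
    return x

# Illustrative problem: f = 0.5*||Ax - b||^2, g = 0.1*||x||_1 (soft-threshold prox).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 50)), rng.standard_normal(20)
grad_f = lambda x: A.T @ (A @ x - b)
prox_g = lambda y, a: np.sign(y) * np.maximum(np.abs(y) - 0.1 * a, 0.0)
L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad_f (spectral norm squared)
beta = 0.5
alpha = 1.9 * (1 - beta) / L       # inside the admissible range alpha < 2*(1 - beta)/L
x_hat = ipiano(grad_f, prox_g, np.zeros(50), alpha, beta)
```

The Moreau-envelope identity behind the equivalence claimed in the abstract is \(\nabla e_{\lambda f}(x) = (x - \operatorname{prox}_{\lambda f}(x))/\lambda\); for a prox-regular \(f\) it holds only locally, which is exactly why the analysis is confined to a neighborhood of the attracting minimum.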
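The feasibility experiment mentioned at the end of the abstract fits the same template: with \(f = \tfrac12 \operatorname{dist}(\cdot, B)^2\), whose gradient is \(x - P_B(x)\) with Lipschitz constant 1 for closed convex \(B\), and \(g\) the indicator of \(A\), the iteration becomes an inertial alternating projection method. A hedged sketch reusing `ipiano` from above, with illustrative sets (a box and a ball) standing in for the paper's feasibility problem:

```python
# Feasibility between illustrative sets: box A = [-1, 1]^n and ball B = {x : ||x - c|| <= r}.
n = 50
c, r = np.full(n, 0.8), 1.0
proj_B = lambda x: c + (x - c) * min(1.0, r / max(np.linalg.norm(x - c), 1e-12))
proj_A = lambda y, a=None: np.clip(y, -1.0, 1.0)   # prox of the indicator of A, any step size
grad_f = lambda x: x - proj_B(x)                   # gradient of 0.5*dist(x, B)^2; L = 1
x_feas = ipiano(grad_f, proj_A, np.zeros(n), alpha=1.0, beta=0.4)  # beta < 0.5 keeps alpha = 1 admissible
# With alpha = 1 the update collapses to x_{k+1} = P_A(P_B(x_k) + beta*(x_k - x_{k-1}));
# beta = 0 recovers classical (non-inertial) alternating projections.
```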
Recommendations
- Convergence rates of the heavy ball method for quasi-strongly convex optimization
- From global to local convergence of interior methods for nonlinear optimization
- Local convergence of the interior-point Newton method for general nonlinear programming
- Local convexification of the Lagrangian function for nonlinear nonconvex optimization
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- On the local convergence of a stochastic semismooth Newton method for nonsmooth nonconvex optimization
- Local saddle point and a class of convexification methods for nonconvex optimization problems
- Locally optimal and heavy ball GMRES methods
- Local convergence of the affine-scaling interior-point algorithm for nonlinear programming
- Local convexification of the Lagrangian function in nonconvex optimization
Cites work
- scientific article; zbMATH DE number 1201576
- scientific article; zbMATH DE number 1160037
- scientific article; zbMATH DE number 3371284
- A block coordinate descent method for regularized multiconvex optimization with applications to nonnegative tensor factorization and completion
- A block coordinate variable metric forward-backward algorithm
- A generalized inexact proximal point method for nonsmooth functions that satisfies Kurdyka-Łojasiewicz inequality
- A globally convergent algorithm for nonconvex optimization based on block coordinate update
- A nonsmooth Morse-Sard theorem for subanalytic functions
- Alternating Projections on Manifolds
- An inertial Tseng's type proximal algorithm for nonsmooth and nonconvex optimization problems
- An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Characterizations of Łojasiewicz inequalities: Subgradient flows, talweg, convexity
- Clarke Subgradients of Stratifiable Functions
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- Convergence of non-smooth descent methods using the Kurdyka-Łojasiewicz inequality
- Convergence of the Iterates of Descent Methods for Analytic Cost Functions
- Convergence to equilibrium for the backward Euler scheme and applications
- Convex analysis and monotone operator theory in Hilbert spaces
- Differential properties of the Moreau envelope
- Douglas-Rachford splitting for nonconvex optimization with application to nonconvex feasibility problems
- Error bounds for parametric polynomial systems with applications to higher-order stability analysis and convergence rates
- Forward-backward quasi-Newton methods for nonsmooth optimization problems
- From error bounds to the complexity of first-order descent methods for convex functions
- Global convergence of splitting methods for nonconvex composite optimization
- Heavy-ball method in nonconvex optimization problems
- Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems
- Integration of subdifferentials of nonconvex functions
- Local differentiability of distance functions
- Local linear convergence for alternating and averaged nonconvex projections
- Majorization-minimization procedures and convergence of SQP methods for semi-algebraic and tame programs
- New fractional error bounds for polynomial systems with applications to Hölderian stability in optimization and spectral theory of tensors
- On gradients of functions definable in o-minimal structures
- On iteratively reweighted algorithms for nonsmooth nonconvex optimization in computer vision
- On semi- and subanalytic geometry
- On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Peaceman-Rachford splitting for a class of nonconvex optimization problems
- Prox-regular functions in variational analysis
- Prox-regularity of spectral functions and spectral sets
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Some methods of speeding up the convergence of iteration methods
- Splitting methods with variable metric for Kurdyka-Łojasiewicz functions and general convergence rates
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Variable metric forward-backward algorithm for minimizing the sum of a differentiable function and a convex function
- Variational Analysis
- iPiano: inertial proximal algorithm for nonconvex optimization
Cited in (16)
- Convergence Analysis of the Proximal Gradient Method in the Presence of the Kurdyka-Łojasiewicz Property Without Global Lipschitz Assumptions
- Inertial Newton algorithms avoiding strict saddle points
- A Regularized Newton Method for \(\ell_q\)-Norm Composite Optimization Problems
- Convex-concave backtracking for inertial Bregman proximal gradient algorithms in nonconvex optimization
- Convergence of non-smooth descent methods using the Kurdyka-Łojasiewicz inequality
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- Distributed stochastic inertial-accelerated methods with delayed derivatives for nonconvex problems
- Convergence of the Momentum Method for Semialgebraic Functions with Locally Lipschitz Gradients
- Unifying abstract inexact convergence theorems and block coordinate variable metric iPiano
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- Non-monotone Behavior of the Heavy Ball Method
- An abstract convergence framework with application to inertial inexact forward-backward methods
- Bregman proximal mappings and Bregman-Moreau envelopes under relative prox-regularity
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Learning weakly convex regularizers for convergent image-reconstruction algorithms
- A forward-backward algorithm with different inertial terms for structured non-convex minimization problems