Forward-backward quasi-Newton methods for nonsmooth optimization problems
Publication: 2364124
MaRDI QID: Q2364124
DOI: 10.1007/s10589-017-9912-y
zbMATH Open: 1401.90226
arXiv: 1604.08096
OpenAlex: W2951948474
Authors: Lorenzo Stella, Andreas Themelis, Panagiotis Patrinos
Publication date: 18 July 2017
Published in: Computational Optimization and Applications
Abstract: The forward-backward splitting method (FBS) for minimizing a nonsmooth composite function can be interpreted as a (variable-metric) gradient method over a continuously differentiable function which we call the forward-backward envelope (FBE). This makes it possible to extend algorithms for smooth unconstrained optimization and apply them to nonsmooth (possibly constrained) problems. Since the FBE and its gradient can be computed by simply evaluating forward-backward steps, the resulting methods rely on the very same black-box oracle as FBS. We propose an algorithmic scheme that enjoys the same global convergence properties as FBS when the problem is convex, or when the objective function possesses the Kurdyka-Łojasiewicz property at its critical points. Moreover, when using quasi-Newton directions the proposed method achieves superlinear convergence provided that the usual second-order sufficiency conditions on the FBE hold at the limit point of the generated sequence. Such conditions translate into milder requirements on the original function involving generalized second-order differentiability. We show that BFGS fits our framework and that its limited-memory variant L-BFGS is well suited for large-scale problems, greatly outperforming FBS and its accelerated version in practice. The analysis of superlinear convergence is based on an extension of the Dennis-Moré theorem for the proposed algorithmic scheme.
Full work available at URL: https://arxiv.org/abs/1604.08096
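The key construction in the abstract, the forward-backward envelope, can be evaluated with the same oracle as one FBS iteration. Below is a minimal Python sketch, assuming the standard FBE formula phi_gamma(x) = f(x) + <grad f(x), x_bar - x> + ||x_bar - x||^2/(2*gamma) + g(x_bar) with x_bar = prox_{gamma g}(x - gamma grad f(x)); the LASSO problem data (A, b, lam) and the step size gamma are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: evaluating the forward-backward envelope (FBE) through a
# single forward-backward step, on a hypothetical LASSO instance
#   minimize F(x) = f(x) + g(x),  f(x) = 0.5*||Ax - b||^2,  g(x) = lam*||x||_1
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))   # illustrative data, not from the paper
b = rng.standard_normal(40)
lam = 0.1

f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
g = lambda x: lam * np.sum(np.abs(x))
grad_f = lambda x: A.T @ (A @ x - b)

def prox_g(z, gamma):
    """Soft-thresholding: proximal map of gamma * lam * ||.||_1."""
    return np.sign(z) * np.maximum(np.abs(z) - gamma * lam, 0.0)

def fbe(x, gamma):
    """Return phi_gamma(x) and the forward-backward point x_bar = T_gamma(x)."""
    x_bar = prox_g(x - gamma * grad_f(x), gamma)   # one FBS step
    d = x_bar - x
    # phi_gamma(x) = f(x) + <grad f(x), d> + ||d||^2 / (2*gamma) + g(x_bar)
    return f(x) + grad_f(x) @ d + (d @ d) / (2 * gamma) + g(x_bar), x_bar

gamma = 0.95 / np.linalg.norm(A, 2) ** 2   # step size below 1/L, L = Lip(grad f)
x = rng.standard_normal(100)
phi, x_bar = fbe(x, gamma)

# Sandwich property for gamma < 1/L:  F(x_bar) <= phi_gamma(x) <= F(x),
# with equality exactly at fixed points of the forward-backward map.
print(f"F(x)         = {f(x) + g(x):.4f}")
print(f"phi_gamma(x) = {phi:.4f}")
print(f"F(x_bar)     = {f(x_bar) + g(x_bar):.4f}")
```

Since the FBE is real-valued (and continuously differentiable when f is twice differentiable), smooth-optimization machinery such as L-BFGS directions can be applied to it while reusing exactly the gradient and prox oracle above, which is the mechanism the abstract describes.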
Recommendations
- Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone linesearch algorithms
- On quasi-Newton forward-backward splitting: proximal calculus and convergence
- A quasi-Newton approach to nonsmooth convex optimization problems in machine learning
- A bundle-type quasi-Newton method for nonconvex nonsmooth optimization
- An approximate quasi-Newton bundle-type method for nonsmooth optimization
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- iPiano: inertial proximal algorithm for nonconvex optimization
- A fast algorithm for sparse reconstruction based on shrinkage, subspace optimization, and continuation
- Splitting Algorithms for the Sum of Two Nonlinear Operators
- Variational Analysis
- Title not available
- Convex analysis and monotone operator theory in Hilbert spaces
- On the limited memory BFGS method for large scale optimization
- Gradient methods for minimizing composite functions
- Updating Quasi-Newton Matrices with Limited Storage
- Title not available
- Exact matrix completion via convex optimization
- Title not available
- A Nonlinear Conjugate Gradient Method with a Strong Global Convergence Property
- Proximal splitting methods in signal processing
- Proximal Alternating Minimization and Projection Methods for Nonconvex Problems: An Approach Based on the Kurdyka-Łojasiewicz Inequality
- Proximal alternating linearized minimization for nonconvex and nonsmooth problems
- Sparse Reconstruction by Separable Approximation
- Signal Recovery by Proximal Forward-Backward Splitting
- A trust region method for nonsmooth convex optimization
- A family of variable metric proximal methods
- Title not available
- An accelerated proximal gradient algorithm for nuclear norm regularized linear least squares problems
- Quasi-Newton Methods, Motivation and Theory
- Monotone Operators and the Proximal Point Algorithm
- Using Complex Variables to Estimate Derivatives of Real Functions
- A Globally and Superlinearly Convergent Algorithm for Nonsmooth Convex Minimization
- Finite-Dimensional Variational Inequalities and Complementarity Problems
- Matrix mathematics. Theory, facts, and formulas
- Proximité et dualité dans un espace hilbertien
- Equivalent differentiable optimization problems and descent methods for asymmetric variational inequality problems
- On semi- and subanalytic geometry
- Unconstrained optimization reformulations of variational inequality problems
- Convergence of descent methods for semi-algebraic and tame problems: proximal algorithms, forward-backward splitting, and regularized Gauss-Seidel methods
- An inertial forward-backward algorithm for the minimization of the sum of two nonconvex functions
- Title not available
- On gradients of functions definable in o-minimal structures
- Newton's Method for B-Differentiable Equations
- Title not available
- The Łojasiewicz Inequality for Nonsmooth Subanalytic Functions with Applications to Subgradient Dynamical Systems
- Proximal quasi-Newton methods for nondifferentiable convex optimization
- The BFGS method with exact line searches fails for non-convex objective functions
- On the superlinear convergence of the variable metric proximal point algorithm using Broyden and BFGS matrix secant updating
- On the global convergence of the BFGS method for nonconvex unconstrained optimization problems
- On local convergence of the method of alternating projections
- A Tool for the Analysis of Quasi-Newton Methods with Application to Unconstrained Minimization
- Quasi-Newton Bundle-Type Methods for Nondifferentiable Convex Optimization
- Convergence Properties of the BFGS Algorithm
- On the convergence of the proximal algorithm for nonsmooth functions involving analytic features
- Practical Aspects of the Moreau-Yosida Regularization: Theoretical Preliminaries
- Practical inexact proximal quasi-Newton method with global complexity analysis
- Second-Order Optimality Conditions in Nonlinear Programming Obtained by Way of Epi-Derivatives
- First- and Second-Order Epi-Differentiability in Nonlinear Programming
- A Characterization of Superlinear Convergence and Its Application to Quasi-Newton Methods
- On the divergence of line search methods
- Amenable functions in optimization
- Generalized Hessian Properties of Regularized Nonsmooth Functions
- Local convergence of quasi-Newton methods for B-differentiable equations
- A global piecewise smooth Newton method for fast large-scale model predictive control
- Title not available
- Exact penalty functions for constrained minimization problems via regularized gap function for variational inequalities
- Further properties of the forward-backward envelope with applications to difference-of-convex programming
- A perfect example for the BFGS method
Cited In (46)
- Convergence of inexact forward-backward algorithms using the forward-backward envelope
- Envelope functions: unifications and further properties
- Forward-backward envelope for the sum of two nonconvex functions: further properties and nonmonotone linesearch algorithms
- Role of subgradients in variational analysis of polyhedral functions
- Proximal gradient flow and Douglas-Rachford splitting dynamics: global exponential stability via integral quadratic constraints
- An inexact variable metric proximal point algorithm for generic quasi-Newton acceleration
- An inexact successive quadratic approximation method for a class of difference-of-convex optimization problems
- A Regularized Newton Method for \(\ell_q\)-Norm Composite Optimization Problems
- Incremental quasi-Newton algorithms for solving a nonconvex, nonsmooth, finite-sum optimization problem
- Kurdyka-Łojasiewicz exponent via inf-projection
- Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
- Manifold sampling for optimization of nonconvex functions that are piecewise linear compositions of smooth components
- A new homotopy proximal variable-metric framework for composite convex minimization
- Local convergence of the heavy-ball method and iPiano for non-convex optimization
- Adaptive FISTA for Nonconvex Optimization
- Further properties of the forward-backward envelope with applications to difference-of-convex programming
- An accelerated coordinate gradient descent algorithm for non-separable composite optimization
- Second order semi-smooth proximal Newton methods in Hilbert spaces
- Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions
- Globalized inexact proximal Newton-type methods for nonconvex composite functions
- Massively parallelizable proximal algorithms for large-scale stochastic optimal control problems
- A Trust-region Method for Nonsmooth Nonconvex Optimization
- Douglas-Rachford Splitting and ADMM for Nonconvex Optimization: Tight Convergence Results
- The forward-backward splitting method and its convergence rate for the minimization of the sum of two functions in Banach spaces
- A Bregman forward-backward linesearch algorithm for nonconvex composite optimization: superlinear convergence to nonisolated local minima
- Template-based image reconstruction facing different topologies
- Forward-Backward Extragradient Methods for Quasimonotone Variational Inequalities
- A Chain Rule for Strict Twice Epi-Differentiability and Its Applications
- Nonmonotone globalization for Anderson acceleration via adaptive regularization
- On quasi-Newton forward-backward splitting: proximal calculus and convergence
- A stochastic semismooth Newton method for nonsmooth nonconvex optimization
- A VMiPG method for composite optimization with nonsmooth term having no closed-form proximal mapping
- Smoothing unadjusted Langevin algorithms for nonsmooth composite potential functions
- Inexact reduced gradient methods in nonconvex optimization
- The forward-backward envelope for sampling with the overdamped Langevin algorithm
- An abstract convergence framework with application to inertial inexact forward-backward methods
- Coordinate descent methods beyond smoothness and separability
- A family of optimal weighted conjugate-gradient-type methods for strictly convex quadratic minimization
- A forward-backward-forward algorithm for solving quasimonotone variational inequalities
- Douglas-Rachford splitting and ADMM for nonconvex optimization: accelerated and Newton-type linesearch algorithms
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- An envelope for Davis-Yin splitting and strict saddle-point avoidance
- The developments of proximal point algorithms
- A proximal quasi-Newton method based on memoryless modified symmetric rank-one formula
- Globally convergent coderivative-based generalized Newton methods in nonsmooth optimization
- Newton-type methods with the proximal gradient step for sparse estimation
This page was built for publication: Forward-backward quasi-Newton methods for nonsmooth optimization problems