Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods
DOI: 10.1287/moor.2017.0889 · zbMath: 1440.90046 · arXiv: 1602.06661 · OpenAlex: W2963353224 · MaRDI QID: Q5219676
Dmitriy Drusvyatskiy, Adrian S. Lewis
Publication date: 12 March 2020
Published in: Mathematics of Operations Research
Full work available at URL: https://arxiv.org/abs/1602.06661
Keywords: error bound; subdifferential; linear convergence; quadratic growth; proximal algorithm; subregularity; tilt-stability
MSC classification: Convex programming (90C25); Numerical optimization and variational techniques (65K10); Sensitivity, stability, parametric optimization (90C31); Methods of successive quadratic programming type (90C55)
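As a point of orientation for the keywords above, the following is a minimal illustrative sketch, not taken from the paper: the proximal gradient method applied to a Lasso-type composite problem, the kind of objective for which error-bound and quadratic-growth conditions of the sort studied here are known to yield linear convergence. All data, dimensions, and parameters below are hypothetical.

```python
import numpy as np

# Illustrative sketch: proximal gradient (ISTA) on the composite problem
#   min_x  0.5 * ||A x - b||^2 + lam * ||x||_1.
# Synthetic data; not an implementation from the paper.
rng = np.random.default_rng(0)
m, n = 60, 20
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1

L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part's gradient
step = 1.0 / L

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def objective(x):
    r = A @ x - b
    return 0.5 * r @ r + lam * np.abs(x).sum()

x = np.zeros(n)
history = []
for _ in range(500):
    grad = A.T @ (A @ x - b)           # gradient of the smooth part
    x = soft_threshold(x - step * grad, step * lam)
    history.append(objective(x))

# Under a quadratic-growth / error-bound condition the suboptimality gap
# decays geometrically: the ratio of successive gaps settles below 1.
gaps = np.array(history) - history[-1]
print(gaps[1:9] / gaps[:8])
```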
Related Items
- A Unified Analysis of Descent Sequences in Weakly Convex Optimization, Including Convergence Rates for Bundle Methods
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
- Proximal methods avoid active strict saddles of weakly convex functions
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
- Global convergence of model function based Bregman proximal minimization algorithms
- Error bound conditions and convergence of optimization methods on smooth and proximally smooth manifolds
- Stability of minimization problems and the error bound condition
- Quadratic growth and strong metric subregularity of the subdifferential for a class of non-prox-regular functions
- Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
- Stochastic Methods for Composite and Weakly Convex Optimization Problems
- Kurdyka-Łojasiewicz exponent via inf-projection
- On the Convergence of Stochastic Primal-Dual Hybrid Gradient
- Golden Ratio Primal-Dual Algorithm with Linesearch
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- The equivalence of three types of error bounds for weakly and approximately convex functions
- Linear convergence of first order methods for non-strongly convex optimization
- Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
- A diagonal finite element-projection-proximal gradient algorithm for elliptic optimal control problem
- Inexact successive quadratic approximation for regularized optimization
- Non-smooth non-convex Bregman minimization: unification and new algorithms
- The multiproximal linearization method for convex composite problems
- Linearly-convergent FISTA variant for composite optimization with duality
- Revisiting Spectral Bundle Methods: Primal-Dual (Sub)linear Convergence Rates
- Trimmed Statistical Estimation via Variance Reduction
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- A globally convergent proximal Newton-type method in nonsmooth convex optimization
- A strict complementarity approach to error bound and sensitivity of solution of conic programs
- Consistent approximations in composite optimization
- Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
- Convergence of an asynchronous block-coordinate forward-backward algorithm for convex composite optimization
- Calculus rules of the generalized concave Kurdyka-Łojasiewicz property
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- Proximal gradient/semismooth Newton methods for projection onto a polyhedron via the duality-gap-active-set strategy
- Convergence Rate Analysis of a Dykstra-Type Projection Algorithm
- Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
- Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
- Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
- No Gap Second-Order Optimality Conditions for Circular Conic Programs
- Stochastic Model-Based Minimization of Weakly Convex Functions
- On the gradient projection method for weakly convex functions on a proximally smooth set
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- The gradient projection algorithm for a proximally smooth set and a function with Lipschitz continuous gradient
- Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
- Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
- New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
- On the Simplicity and Conditioning of Low Rank Semidefinite Programs
- An Optimal-Storage Approach to Semidefinite Programming Using Approximate Complementarity
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- On the linear convergence of forward-backward splitting method. I: Convergence analysis
- Growth conditions on a function and the error bound condition
- A linearly convergent majorized ADMM with indefinite proximal terms for convex composite programming and its applications
- Sharpness, Restart, and Acceleration
- The gradient projection algorithm for smooth sets and functions in nonconvex case
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Characterization of solutions of strong-weak convex programming problems
- Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton’s Method
- Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
- Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
- Augmented Lagrangian method for second-order cone programs under second-order sufficiency
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Convergence rates of forward-Douglas-Rachford splitting method
- MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
- Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Efficiency of minimizing compositions of convex functions and smooth maps
- Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- Quadratic Growth and Strong Metric Subregularity of the Subdifferential via Subgradient Graphical Derivative
- Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
- Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction
- On Degenerate Doubly Nonnegative Projection Problems
- High-Order Optimization Methods for Fully Composite Problems
- Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
- Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
- Sufficient conditions for a minimum of a strongly quasiconvex function on a weakly convex set
- A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
- Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
- Primal superlinear convergence of SQP methods in piecewise linear-quadratic composite optimization
- Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A proximal method for composite minimization
- Second-order variational analysis and characterizations of tilt-stable optimal solutions in infinite-dimensional spaces
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Optimality conditions for non-finite valued convex composite functions
- Quadratic growth and critical point stability of semi-algebraic functions
- An extension of Luque's growth condition
- Strong uniqueness: A far-reaching criterion for the convergence analysis of iterative procedures
- On the convergence of von Neumann's alternating projection algorithm for two sets
- Error bounds and convergence analysis of feasible descent methods: A general approach
- Nonsmooth equations in optimization. Regularity, calculus, methods and applications
- Introductory lectures on convex optimization. A basic course.
- A Gauss-Newton method for convex composite optimization
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Efficiency of minimizing compositions of convex functions and smooth maps
- Nonlinear local error bounds via a change of metric
- Linear convergence of first order methods for non-strongly convex optimization
- Second-order characterizations of tilt stability with applications to nonlinear programming
- Introduction to Smooth Manifolds
- Sparse/Robust Estimation and Kalman Smoothing with Nonsmooth Log-Concave Densities: Modeling, Computation, and Theory
- Second-Order Variational Analysis in Conic Programming with Applications to Optimality and Stability
- Weak Sharp Minima in Mathematical Programming
- On the Evaluation Complexity of Composite Function Minimization with Applications to Nonconvex Nonlinear Programming
- Convergence of an Inexact Algorithm for Composite Nonsmooth Optimization
- Implicit Functions and Solution Mappings
- On the global convergence of trust region algorithms for unconstrained minimization
- On the superlinear convergence of a trust region algorithm for nonsmooth optimization
- Descent methods for composite nondifferentiable optimization problems
- Second order necessary and sufficient conditions for convex composite NDO
- A model algorithm for composite nondifferentiable optimization problems
- Monotone Operators and the Proximal Point Algorithm
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Variational Analysis
- Tilt Stability of a Local Minimum
- Metric regularity and subdifferential calculus
- Variational Analysis and Generalized Differentiation I
- The radius of metric regularity
- Prox-regular functions in variational analysis
- Second-Order Subdifferential Calculus with Applications to Tilt Stability in Optimization
- Second-order growth, tilt stability, and metric regularity of the subdifferential
- Tilt Stability, Uniform Quadratic Growth, and Strong Metric Regularity of the Subdifferential
- Regularization and Variable Selection Via the Elastic Net
- Algorithms in real algebraic geometry