Error Bounds, Quadratic Growth, and Linear Convergence of Proximal Methods



DOI: 10.1287/moor.2017.0889
zbMath: 1440.90046
arXiv: 1602.06661
OpenAlex: W2963353224
MaRDI QID: Q5219676

Adrian S. Lewis, Dmitriy Drusvyatskiy

Publication date: 12 March 2020

Published in: Mathematics of Operations Research

Full work available at URL: https://arxiv.org/abs/1602.06661
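
For orientation (this note is not part of the portal record): the paper concerns conditions, such as error bounds and quadratic growth, under which proximal methods converge linearly. Below is a minimal illustrative sketch of the proximal gradient method on a toy Lasso problem, a standard setting where such linear rates hold without strong convexity. All names, data, and parameters (soft_threshold, lam, the random matrix A) are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient_lasso(A, b, lam, steps=500):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)  # forward step, then prox
    return x

# Toy problem: A is wide, so the objective is not strongly convex, yet
# error-bound/quadratic-growth conditions still yield linear convergence.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = A @ (np.arange(100) < 5) + 0.01 * rng.standard_normal(40)
x_hat = proximal_gradient_lasso(A, b, lam=0.1)
print("nonzeros:", np.count_nonzero(np.abs(x_hat) > 1e-6))
```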



Related Items

A Unified Analysis of Descent Sequences in Weakly Convex Optimization, Including Convergence Rates for Bundle Methods
Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
Proximal methods avoid active strict saddles of weakly convex functions
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
Global convergence of model function based Bregman proximal minimization algorithms
Error bound conditions and convergence of optimization methods on smooth and proximally smooth manifolds
Stability of minimization problems and the error bound condition
Quadratic growth and strong metric subregularity of the subdifferential for a class of non-prox-regular functions
Stochastic Multilevel Composition Optimization Algorithms with Level-Independent Convergence Rates
Stochastic Methods for Composite and Weakly Convex Optimization Problems
Kurdyka-Łojasiewicz exponent via inf-projection
On the Convergence of Stochastic Primal-Dual Hybrid Gradient
Golden Ratio Primal-Dual Algorithm with Linesearch
Quadratic growth conditions and uniqueness of optimal solution to Lasso
The equivalence of three types of error bounds for weakly and approximately convex functions
Linear convergence of first order methods for non-strongly convex optimization
Linear convergence of Frank-Wolfe for rank-one matrix recovery without strong convexity
A diagonal finite element-projection-proximal gradient algorithm for elliptic optimal control problem
Inexact successive quadratic approximation for regularized optimization
Non-smooth non-convex Bregman minimization: unification and new algorithms
The multiproximal linearization method for convex composite problems
Linearly-convergent FISTA variant for composite optimization with duality
Revisiting Spectral Bundle Methods: Primal-Dual (Sub)linear Convergence Rates
Trimmed Statistical Estimation via Variance Reduction
Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
A globally convergent proximal Newton-type method in nonsmooth convex optimization
A strict complementarity approach to error bound and sensitivity of solution of conic programs
Consistent approximations in composite optimization
Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
Convergence of an asynchronous block-coordinate forward-backward algorithm for convex composite optimization
Calculus rules of the generalized concave Kurdyka-Łojasiewicz property
Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
Proximal gradient/semismooth Newton methods for projection onto a polyhedron via the duality-gap-active-set strategy
Convergence Rate Analysis of a Dykstra-Type Projection Algorithm
Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
Linearized proximal algorithms with adaptive stepsizes for convex composite optimization with applications
Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
No Gap Second-Order Optimality Conditions for Circular Conic Programs
Stochastic Model-Based Minimization of Weakly Convex Functions
On the gradient projection method for weakly convex functions on a proximally smooth set
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
The gradient projection algorithm for a proximally smooth set and a function with Lipschitz continuous gradient
Global Convergence Rate of Proximal Incremental Aggregated Gradient Methods
Proximal quasi-Newton methods for regularized convex optimization with linear and accelerated sublinear convergence rates
New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
On the Simplicity and Conditioning of Low Rank Semidefinite Programs
An Optimal-Storage Approach to Semidefinite Programming Using Approximate Complementarity
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
On the linear convergence of forward-backward splitting method. I: Convergence analysis
Growth conditions on a function and the error bound condition
A linearly convergent majorized ADMM with indefinite proximal terms for convex composite programming and its applications
Sharpness, Restart, and Acceleration
The gradient projection algorithm for smooth sets and functions in nonconvex case
Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Characterization of solutions of strong-weak convex programming problems
Strong Metric (Sub)regularity of Karush–Kuhn–Tucker Mappings for Piecewise Linear-Quadratic Convex-Composite Optimization and the Quadratic Convergence of Newton’s Method
Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
Low-rank matrix recovery with composite optimization: good conditioning and rapid convergence
Stochastic (Approximate) Proximal Point Methods: Convergence, Optimality, and Adaptivity
Augmented Lagrangian method for second-order cone programs under second-order sufficiency
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Convergence rates of forward-Douglas-Rachford splitting method
MultiLevel Composite Stochastic Optimization via Nested Variance Reduction
Variable Metric Forward-Backward Algorithm for Composite Minimization Problems
Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
Inertial proximal incremental aggregated gradient method with linear convergence guarantees
Efficiency of minimizing compositions of convex functions and smooth maps
Stochastic variance-reduced prox-linear algorithms for nonconvex composite optimization
Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
Quadratic Growth and Strong Metric Subregularity of the Subdifferential via Subgradient Graphical Derivative
Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems
Learnable Descent Algorithm for Nonsmooth Nonconvex Image Reconstruction
On Degenerate Doubly Nonnegative Projection Problems
High-Order Optimization Methods for Fully Composite Problems
Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
Solving Nonsmooth and Nonconvex Compound Stochastic Programs with Applications to Risk Measure Minimization
Sufficient conditions for a minimum of a strongly quasiconvex function on a weakly convex set
A Study of Convex Convex-Composite Functions via Infimal Convolution with Applications
Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
Primal superlinear convergence of SQP methods in piecewise linear-quadratic composite optimization
Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis



Cites Work