On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
Publication: 3988952
DOI: 10.1137/0330025
zbMath: 0756.90084
OpenAlex: W2090084467
MaRDI QID: Q3988952
Publication date: 28 June 1992
Published in: SIAM Journal on Control and Optimization
Full work available at URL: http://hdl.handle.net/1721.1/3206
Related Items
Further properties of the forward-backward envelope with applications to difference-of-convex programming
Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
Convergence rate analysis of an asynchronous space decomposition method for convex minimization
Error bounds for analytic systems and their applications
On the solution of convex QPQC problems with elliptic and other separable constraints with strong curvature
Error bounds for inconsistent linear inequalities and programs
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
On the rate of convergence of projected Barzilai–Borwein methods
A class of iterative methods for solving nonlinear projection equations
Error estimates and Lipschitz constants for best approximation in continuous function spaces
Linearly convergent descent methods for the unconstrained minimization of convex quadratic splines
Error bounds in mathematical programming
On globally Q-linear convergence of a splitting method for group Lasso
A unified approach to error bounds for structured convex optimization problems
Approximation accuracy, gradient methods, and error bound for structured convex optimization
CKV-type \(B\)-matrices and error bounds for linear complementarity problems
A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems
Revisiting Stochastic Loss Networks: Structures and Approximations
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
A modified proximal gradient method for a family of nonsmooth convex optimization problems
Linear convergence analysis of the use of gradient projection methods on total variation problems
Global convergence of a modified gradient projection method for convex constrained problems
A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
Separable spherical constraints and the decrease of a quadratic function in the gradient projection step
A global piecewise smooth Newton method for fast large-scale model predictive control
On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
Decomposable norm minimization with proximal-gradient homotopy algorithm
On the linear convergence of the alternating direction method of multipliers
On a global error bound for a class of monotone affine variational inequality problems
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization
A coordinate gradient descent method for nonsmooth separable minimization
Iteration complexity analysis of block coordinate descent methods
Convergence properties of nonmonotone spectral projected gradient methods
A proximal gradient descent method for the extended second-order cone linear complementarity problem
An optimal algorithm and superrelaxation for minimization of a quadratic function subject to separable convex constraints with applications
Nonconvex proximal incremental aggregated gradient method with linear convergence
Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
Unified framework of extragradient-type methods for pseudomonotone variational inequalities
Sufficient conditions for error bounds of difference functions and applications
A block coordinate variable metric forward-backward algorithm
A Fast Active Set Block Coordinate Descent Algorithm for $\ell_1$-Regularized Least Squares
A new error bound for linear complementarity problems with weakly chained diagonally dominant \(B\)-matrices
Bounded perturbation resilience of projected scaled gradient methods
A parallel line search subspace correction method for composite convex optimization
Convergence of splitting and Newton methods for complementarity problems: An application of some sensitivity results
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
Projection onto a Polyhedron that Exploits Sparsity
A new gradient projection algorithm for convex minimization problem and its application to split feasibility problem
A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
An efficient numerical method for the symmetric positive definite second-order cone linear complementarity problem
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Convergence rates of forward-Douglas-Rachford splitting method
Accelerate stochastic subgradient method by leveraging local growth condition
Perturbation analysis of a condition number for convex inequality systems and global error bounds for analytic systems
An active-set algorithmic framework for non-convex optimization problems over the simplex
On the proximal Landweber Newton method for a class of nonsmooth convex problems
Error bounds and convergence analysis of feasible descent methods: A general approach
On Degenerate Doubly Nonnegative Projection Problems
A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization
Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
On the convergence of the coordinate descent method for convex differentiable minimization
Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
A unified description of iterative algorithms for traffic equilibria
An infeasible projection type algorithm for nonmonotone variational inequalities