On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization

From MaRDI portal

Publication: 3988952

DOI: 10.1137/0330025
zbMath: 0756.90084
OpenAlex: W2090084467
MaRDI QID: Q3988952

Zhi-Quan Luo, Paul Tseng

Publication date: 28 June 1992

Published in: SIAM Journal on Control and Optimization

Full work available at URL: http://hdl.handle.net/1721.1/3206




Related Items (78)

Further properties of the forward-backward envelope with applications to difference-of-convex programming
Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
Convergence rate analysis of an asynchronous space decomposition method for convex Minimization
Error bounds for analytic systems and their applications
On the solution of convex QPQC problems with elliptic and other separable constraints with strong curvature
Error bounds for inconsistent linear inequalities and programs
On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
On the rate of convergence of projected Barzilai–Borwein methods
A class of iterative methods for solving nonlinear projection equations
Error estimates and Lipschitz constants for best approximation in continuous function spaces
Linearly convergent descent methods for the unconstrained minimization of convex quadratic splines
Error bounds in mathematical programming
On globally Q-linear convergence of a splitting method for group Lasso
Unnamed Item
A unified approach to error bounds for structured convex optimization problems
Approximation accuracy, gradient methods, and error bound for structured convex optimization
CKV-type \(B\)-matrices and error bounds for linear complementarity problems
A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems
Revisiting Stochastic Loss Networks: Structures and Approximations
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
A globally convergent proximal Newton-type method in nonsmooth convex optimization
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
A modified proximal gradient method for a family of nonsmooth convex optimization problems
Linear convergence analysis of the use of gradient projection methods on total variation problems
Global convergence of a modified gradient projection method for convex constrained problems
A Highly Efficient Semismooth Newton Augmented Lagrangian Method for Solving Lasso Problems
Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
Separable spherical constraints and the decrease of a quadratic function in the gradient projection step
A global piecewise smooth Newton method for fast large-scale model predictive control
On proximal gradient method for the convex problems regularized with the group reproducing kernel norm
On the linear convergence of a proximal gradient method for a class of nonsmooth convex minimization problems
On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
Decomposable norm minimization with proximal-gradient homotopy algorithm
On the linear convergence of the alternating direction method of multipliers
On a global error bound for a class of monotone affine variational inequality problems
The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
On Faster Convergence of Cyclic Block Coordinate Descent-type Methods for Strongly Convex Minimization
A coordinate gradient descent method for nonsmooth separable minimization
Iteration complexity analysis of block coordinate descent methods
Convergence properties of nonmonotone spectral projected gradient methods
A proximal gradient descent method for the extended second-order cone linear complementarity problem
An optimal algorithm and superrelaxation for minimization of a quadratic function subject to separable convex constraints with applications
Nonconvex proximal incremental aggregated gradient method with linear convergence
Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
Unified framework of extragradient-type methods for pseudomonotone variational inequalities.
Sufficient conditions for error bounds of difference functions and applications
A block coordinate variable metric forward-backward algorithm
A Fast Active Set Block Coordinate Descent Algorithm for $\ell_1$-Regularized Least Squares
A new error bound for linear complementarity problems with weakly chained diagonally dominant \(B\)-matrices
Bounded perturbation resilience of projected scaled gradient methods
A parallel line search subspace correction method for composite convex optimization
Convergence of splitting and Newton methods for complementarity problems: An application of some sensitivity results
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
Projection onto a Polyhedron that Exploits Sparsity
A new gradient projection algorithm for convex minimization problem and its application to split feasibility problem
A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
An efficient numerical method for the symmetric positive definite second-order cone linear complementarity problem
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Convergence rates of forward-Douglas-Rachford splitting method
Accelerate stochastic subgradient method by leveraging local growth condition
Perturbation analysis of a condition number for convex inequality systems and global error bounds for analytic systems
An active-set algorithmic framework for non-convex optimization problems over the simplex
On the proximal Landweber Newton method for a class of nonsmooth convex problems
Error bounds and convergence analysis of feasible descent methods: A general approach
On Degenerate Doubly Nonnegative Projection Problems
A Global Dual Error Bound and Its Application to the Analysis of Linearly Constrained Nonconvex Optimization
Hölderian Error Bounds and Kurdyka-Łojasiewicz Inequality for the Trust Region Subproblem
An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
On the convergence of the coordinate descent method for convex differentiable minimization
Linear convergence of prox-SVRG method for separable non-smooth convex optimization problems under bounded metric subregularity
A unified description of iterative algorithms for traffic equilibria
An infeasible projection type algorithm for nonmonotone variational inequalities







This page was built for publication: On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization