Error bounds and convergence analysis of feasible descent methods: A general approach
DOI: 10.1007/BF02096261 · zbMath: 0793.90076 · OpenAlex: W2093417350 · MaRDI QID: Q1312756
Publication date: 7 February 1994
Published in: Annals of Operations Research
Full work available at URL: https://doi.org/10.1007/bf02096261
Mathematics Subject Classification: Nonlinear programming (90C30) · Computational methods for problems pertaining to operations research and mathematical programming (90-08)
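For context: the publication analyzes feasible descent methods under a local error bound. Below is a brief sketch of the framework as it is commonly stated for this line of work; the symbols ($P_X$, the step sizes $\alpha_r$, and the constants $\beta$, $\tau$) are notational assumptions of this summary, not metadata from the entry. A feasible descent method generates

\[
x^{r+1} = P_X\!\bigl(x^r - \alpha_r \nabla f(x^r) + e^r\bigr), \qquad \|e^r\| \le \beta\,\|x^r - x^{r+1}\|,
\]

where $f$ is continuously differentiable, $X$ is closed convex with solution set $\bar X$, and $P_X$ is the Euclidean projection onto $X$. The central hypothesis is a local error bound on the natural residual,

\[
\operatorname{dist}(x, \bar X) \;\le\; \tau \,\bigl\| x - P_X\bigl(x - \nabla f(x)\bigr) \bigr\|
\]

for feasible $x$ with sufficiently small residual; combined with proper separation of the isocost surfaces of $f$ on $\bar X$, it yields linear convergence of the iterates and of the objective values.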
Related Items (only the first 100 items are shown)
- Some recent advances in projection-type methods for variational inequalities
- Globally convergent block-coordinate techniques for unconstrained optimization
- Further properties of the forward-backward envelope with applications to difference-of-convex programming
- Linearly convergent away-step conditional gradient for non-strongly convex functions
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Some developments in general variational inequalities
- New trends in general variational inequalities
- Convergence rate analysis of an asynchronous space decomposition method for convex minimization
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- On the solution of convex QPQC problems with elliptic and other separable constraints with strong curvature
- A survey on some recent developments of alternating direction method of multipliers
- A family of second-order methods for convex \(\ell _1\)-regularized optimization
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- On linear convergence of iterative methods for the variational inequality problem
- Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
- An inexact Newton-like conditional gradient method for constrained nonlinear systems
- Augmented Lagrangian methods for convex matrix optimization problems
- Error bounds in mathematical programming
- Kurdyka-Łojasiewicz exponent via inf-projection
- Convergence analysis of perturbed feasible descent methods
- From error bounds to the complexity of first-order descent methods for convex functions
- A unified approach to error bounds for structured convex optimization problems
- Approximation accuracy, gradient methods, and error bound for structured convex optimization
- Scaling up Bayesian variational inference using distributed computing clusters
- Inexact first-order primal-dual algorithms
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- Linear convergence of first order methods for non-strongly convex optimization
- A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
- Quadratic Growth Conditions for Convex Matrix Optimization Problems Associated with Spectral Functions
- General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems
- Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
- Superrelaxation and the rate of convergence in minimizing quadratic functions subject to bound constraints
- Nonmonotone gradient methods for vector optimization with a portfolio optimization application
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- A coordinate descent margin based-twin support vector machine for classification
- Nonparallel hyperplane support vector machine for binary classification problems
- A smoothing proximal gradient algorithm with extrapolation for the relaxation of \({\ell_0}\) regularization problem
- Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
- Extragradient method in optimization: convergence and complexity
- Separable spherical constraints and the decrease of a quadratic function in the gradient projection step
- On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
- A global piecewise smooth Newton method for fast large-scale model predictive control
- The 2-coordinate descent method for solving double-sided simplex constrained minimization problems
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- A quasi-Newton modified LP-Newton method
- A linearly convergent stochastic recursive gradient method for convex optimization
- On convergence rate of the randomized Gauss-Seidel method
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
- Alternating minimization methods for strongly convex optimization
- A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
- The convergence rate analysis of the symmetric ADMM for the nonconvex separable optimization problems
- Local linear convergence of an ADMM-type splitting framework for equality constrained optimization
- On the convergence of the gradient projection method for convex optimal control problems with bang-bang solutions
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- A coordinate gradient descent method for nonsmooth separable minimization
- Incrementally updated gradient methods for constrained and regularized optimization
- Iteration complexity analysis of block coordinate descent methods
- A coordinate gradient descent method for \(\ell_{1}\)-regularized convex minimization
- Global Lipschitzian error bounds for semidefinite complementarity problems with emphasis on NCPs
- A proximal gradient descent method for the extended second-order cone linear complementarity problem
- A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
- New characterizations of Hoffman constants for systems of linear constraints
- An optimal algorithm and superrelaxation for minimization of a quadratic function subject to separable convex constraints with applications
- Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
- Local linear convergence of the alternating direction method of multipliers for nonconvex separable optimization problems
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
- Rate of convergence analysis of dual-based variables decomposition methods for strongly convex problems
- On the linear convergence of forward-backward splitting method. I: Convergence analysis
- Over relaxed hybrid proximal extragradient algorithm and its application to several operator splitting methods
- An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
- A parallel line search subspace correction method for composite convex optimization
- Fast global convergence of gradient methods for high-dimensional statistical recovery
- Computational Methods for Solving Nonconvex Block-Separable Constrained Quadratic Problems
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Faster subgradient methods for functions with Hölderian growth
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
- Randomized Gradient Boosting Machine
- Local convergence analysis of projection-type algorithms: unified approach
- Projection onto a Polyhedron that Exploits Sparsity
- A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
- Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
- Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem
- Distributed optimization for degenerate loss functions arising from over-parameterization
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Convergence rates of forward-Douglas-Rachford splitting method
- Growth behavior of a class of merit functions for the nonlinear complementarity problem
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- QPALM: a proximal augmented Lagrangian method for nonconvex quadratic programs
- Coordinate descent algorithms
- Network manipulation algorithm based on inexact alternating minimization
- The adventures of a simple algorithm
- Avoiding bad steps in Frank-Wolfe variants
- On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
- A reduced proximal-point homotopy method for large-scale non-convex BQP
- On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
- Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis
Cites Work
- On the convergence properties of Hildreth's quadratic programming algorithm
- Error bounds for nondegenerate monotone linear complementarity problems
- Error bounds for the linear complementarity problem with a P-matrix
- On the gradient-projection method for solving the nonsymmetric linear complementarity problem
- More results on the convergence of iterative methods for the symmetric linear complementarity problem
- Sparsity-preserving SOR algorithms for separable quadratic and linear programming
- Variable metric gradient projection processes in convex feasible sets defined by nonlinear inequalities
- Error bounds for strongly convex programs and (super)linearly convergent iterative schemes for the least 2-norm solution of linear programs
- On the convergence of a basic iterative method for the implicit complementarity problem
- Global error bounds for monotone affine variational inequality problems
- On a global error bound for a class of monotone affine variational inequality problems
- Solution of symmetric linear complementarity problems by iterative methods
- On the convergence of the coordinate descent method for convex differentiable minimization
- On the convergence of projected gradient processes to singular critical points
- Necessary and sufficient conditions for the convergence of iterative methods for the linear complementarity problem
- Iterative Methods for Large Convex Quadratic Programs: A Survey
- Dual Ascent Methods for Problems with Strictly Convex Costs and Linear Constraints: A Unified Approach
- Application of Khobotov’s Algorithm to Variational Inequalities and Network Equilibrium Problems
- Two-Metric Projection Methods for Constrained Optimization
- On the Convergence of a Matrix Splitting Algorithm for the Symmetric Monotone Linear Complementarity Problem
- A Subspace Decomposition Principle for Scaled Gradient Projection Methods: Global Theory
- Distributed asynchronous optimal routing in data networks
- Distributed asynchronous deterministic and stochastic gradient optimization algorithms
- Modification of the extra-gradient method for solving variational inequalities and certain optimization problems
- Some continuity properties of polyhedral multifunctions
- Projection methods for variational inequalities with application to the traffic assignment problem
- Global and Asymptotic Convergence Rate Estimates for a Class of Projected Gradient Processes
- Generalized equations and their solutions, part II: Applications to nonlinear programming
- On the Convergence of the Proximal Point Algorithm for Convex Minimization
- On the Linear Convergence of Descent Methods for Convex Essentially Smooth Minimization
- On Dual Convergence and the Rate of Primal Convergence of Bregman’s Convex Programming Method
- On the Rate of Convergence of a Partially Asynchronous Gradient Projection Algorithm
- Convergence of Iterates of an Inexact Matrix Splitting Algorithm for the Symmetric Monotone Linear Complementarity Problem
- Error Bound and Convergence Analysis of Matrix Splitting Algorithms for the Affine Variational Inequality Problem
- On the Goldstein-Levitin-Polyak gradient projection method
- Combined Primal–Dual and Penalty Methods for Convex Programming
- Monotone Operators and the Proximal Point Algorithm
- Augmented Lagrangians and Applications of the Proximal Point Algorithm in Convex Programming
- On the Convergence Rate of Dual Ascent Methods for Linearly Constrained Convex Minimization
- Remarks on Convergence of the Matrix Splitting Algorithm for the Symmetric Linear Complementarity Problem
- Error bounds for monotone linear complementarity problems
- Inexact Newton methods for the nonlinear complementarity problem
- Projected Newton Methods for Optimization Problems with Simple Constraints
- Minimizing Certain Convex Functions
- On the Solution of Singular and Semidefinite Linear Systems by Iteration
- Convex programming in Hilbert space
- The Solution of a Quadratic Programming Problem Using Systematic Overrelaxation