Error bounds and convergence analysis of feasible descent methods: A general approach

From MaRDI portal

Publication: 1312756

DOI: 10.1007/BF02096261 · zbMath: 0793.90076 · OpenAlex: W2093417350 · MaRDI QID: Q1312756

Zhi-Quan Luo, Paul Tseng

Publication date: 7 February 1994

Published in: Annals of Operations Research

Full work available at URL: https://doi.org/10.1007/bf02096261


Related Items (showing first 100 items)

Some recent advances in projection-type methods for variational inequalities
Globally convergent block-coordinate techniques for unconstrained optimization
Further properties of the forward-backward envelope with applications to difference-of-convex programming
Linearly convergent away-step conditional gradient for non-strongly convex functions
Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
Some developments in general variational inequalities
New trends in general variational inequalities
Convergence rate analysis of an asynchronous space decomposition method for convex minimization
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
On the solution of convex QPQC problems with elliptic and other separable constraints with strong curvature
A survey on some recent developments of alternating direction method of multipliers
A family of second-order methods for convex \(\ell _1\)-regularized optimization
A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
On linear convergence of iterative methods for the variational inequality problem
Parallel Random Coordinate Descent Method for Composite Minimization: Convergence Analysis and Error Bounds
An inexact Newton-like conditional gradient method for constrained nonlinear systems
Augmented Lagrangian methods for convex matrix optimization problems
Error bounds in mathematical programming
Kurdyka-Łojasiewicz exponent via inf-projection
Unnamed Item
Convergence analysis of perturbed feasible descent methods
From error bounds to the complexity of first-order descent methods for convex functions
A unified approach to error bounds for structured convex optimization problems
Approximation accuracy, gradient methods, and error bound for structured convex optimization
Scaling up Bayesian variational inference using distributed computing clusters
Inexact first-order primal-dual algorithms
Quadratic growth conditions and uniqueness of optimal solution to Lasso
Linear convergence of first order methods for non-strongly convex optimization
A block symmetric Gauss-Seidel decomposition theorem for convex composite quadratic programming and its applications
Quadratic Growth Conditions for Convex Matrix Optimization Problems Associated with Spectral Functions
General inertial proximal gradient method for a class of nonconvex nonsmooth optimization problems
Linear Convergence of Proximal Gradient Algorithm with Extrapolation for a Class of Nonconvex Nonsmooth Minimization Problems
Superrelaxation and the rate of convergence in minimizing quadratic functions subject to bound constraints
Nonmonotone gradient methods for vector optimization with a portfolio optimization application
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
A coordinate descent margin based-twin support vector machine for classification
Nonparallel hyperplane support vector machine for binary classification problems
A smoothing proximal gradient algorithm with extrapolation for the relaxation of \({\ell_0}\) regularization problem
Error bound and isocost imply linear convergence of DCA-based algorithms to D-stationarity
Extragradient method in optimization: convergence and complexity
Separable spherical constraints and the decrease of a quadratic function in the gradient projection step
On stochastic mirror-prox algorithms for stochastic Cartesian variational inequalities: randomized block coordinate and optimal averaging schemes
A global piecewise smooth Newton method for fast large-scale model predictive control
The 2-coordinate descent method for solving double-sided simplex constrained minimization problems
Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
A quasi-Newton modified LP-Newton method
A linearly convergent stochastic recursive gradient method for convex optimization
On convergence rate of the randomized Gauss-Seidel method
On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
On the linear convergence of the approximate proximal splitting method for non-smooth convex optimization
Alternating minimization methods for strongly convex optimization
A family of inexact SQA methods for non-smooth convex minimization with provable convergence guarantees based on the Luo-Tseng error bound property
The convergence rate analysis of the symmetric ADMM for the nonconvex separable optimization problems
Local linear convergence of an ADMM-type splitting framework for equality constrained optimization
On the convergence of the gradient projection method for convex optimal control problems with bang-bang solutions
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
A coordinate gradient descent method for nonsmooth separable minimization
Incrementally updated gradient methods for constrained and regularized optimization
Iteration complexity analysis of block coordinate descent methods
A coordinate gradient descent method for \(\ell_{1}\)-regularized convex minimization
Global Lipschitzian error bounds for semidefinite complementarity problems with emphasis on NCPs
A proximal gradient descent method for the extended second-order cone linear complementarity problem
A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training
New characterizations of Hoffman constants for systems of linear constraints
An optimal algorithm and superrelaxation for minimization of a quadratic function subject to separable convex constraints with applications
Resolving learning rates adaptively by locating stochastic non-negative associated gradient projection points using line searches
Local linear convergence of the alternating direction method of multipliers for nonconvex separable optimization problems
Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
Error bounds for non-polyhedral convex optimization and applications to linear convergence of FDM and PGM
Rate of convergence analysis of dual-based variables decomposition methods for strongly convex problems
On the linear convergence of forward-backward splitting method. I: Convergence analysis
Over relaxed hybrid proximal extragradient algorithm and its application to several operator splitting methods
An efficient Hessian based algorithm for solving large-scale sparse group Lasso problems
A parallel line search subspace correction method for composite convex optimization
Fast global convergence of gradient methods for high-dimensional statistical recovery
Computational Methods for Solving Nonconvex Block-Separable Constrained Quadratic Problems
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Faster subgradient methods for functions with Hölderian growth
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization
Randomized Gradient Boosting Machine
Local convergence analysis of projection-type algorithms: unified approach
Projection onto a Polyhedron that Exploits Sparsity
A Block Successive Upper-Bound Minimization Method of Multipliers for Linearly Constrained Convex Optimization
Linear Convergence of Descent Methods for the Unconstrained Minimization of Restricted Strongly Convex Functions
Accelerated iterative hard thresholding algorithm for \(l_0\) regularized regression problem
Distributed optimization for degenerate loss functions arising from over-parameterization
Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
Convergence rates of forward-Douglas-Rachford splitting method
Growth behavior of a class of merit functions for the nonlinear complementarity problem
Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
QPALM: a proximal augmented Lagrangian method for nonconvex quadratic programs
Coordinate descent algorithms
Network manipulation algorithm based on inexact alternating minimization
The adventures of a simple algorithm
Avoiding bad steps in Frank-Wolfe variants
On the convergence of exact distributed generalisation and acceleration algorithm for convex optimisation
A reduced proximal-point homotopy method for large-scale non-convex BQP
On the rate of convergence of alternating minimization for non-smooth non-strongly convex optimization in Banach spaces
Perturbation techniques for convergence analysis of proximal gradient method and other first-order algorithms via variational analysis




Cites Work

This page was built for publication: Error bounds and convergence analysis of feasible descent methods: A general approach