From error bounds to the complexity of first-order descent methods for convex functions

From MaRDI portal

Publication:1675251

DOI: 10.1007/S10107-016-1091-6 · zbMath: 1373.90076 · arXiv: 1510.08234 · OpenAlex: W2962826994 · MaRDI QID: Q1675251

Trong Phong Nguyen, Juan Peypouquet, Jérôme Bolte, Bruce W. Suter

Publication date: 27 October 2017

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1510.08234
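The paper's central theme is that an error bound (equivalently, a Kurdyka-Łojasiewicz-type inequality) on a convex function yields complexity estimates and linear convergence for first-order descent methods. A minimal sketch of this phenomenon, using an illustrative quadratic of my own choosing (not an example from the paper): a positive-definite quadratic satisfies the Polyak-Łojasiewicz inequality, so plain gradient descent contracts the objective gap geometrically.

```python
import numpy as np

# Illustrative quadratic f(x) = 0.5 * x^T A x with A positive definite
# (an assumed toy example, not taken from the paper). Such f satisfies
# the Polyak-Lojasiewicz inequality
#     0.5 * ||grad f(x)||^2 >= mu * (f(x) - f*),
# with mu the smallest eigenvalue of A -- an error-bound-type condition
# under which gradient descent converges linearly.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + np.eye(5)           # symmetric positive definite
mu = np.linalg.eigvalsh(A)[0]     # PL / error-bound constant
L = np.linalg.eigvalsh(A)[-1]     # Lipschitz constant of the gradient

f = lambda x: 0.5 * x @ A @ x     # minimum value f* = 0, attained at x = 0
x = rng.standard_normal(5)
vals = [f(x)]
for _ in range(200):
    x = x - (1.0 / L) * (A @ x)   # gradient step with step size 1/L
    vals.append(f(x))

# Standard PL theory gives f(x_k) - f* <= (1 - mu/L)^k * (f(x_0) - f*).
for k, v in enumerate(vals):
    assert v <= (1 - mu / L) ** k * vals[0] + 1e-12
print("contraction factor 1 - mu/L =", 1 - mu / L)
```

The geometric rate `1 - mu/L` is exactly the kind of complexity estimate that the paper derives, in far greater generality, from error bounds for nonsmooth convex functions.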




Related Items (first 100 items)

- Accelerating inexact successive quadratic approximation for regularized optimization through manifold identification
- Convergence of an asynchronous block-coordinate forward-backward algorithm for convex composite optimization
- Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient
- First-order methods for convex optimization
- On optimal universal first-order methods for minimizing heterogeneous sums
- On continuous selections of polynomial functions
- Convergence Rate Analysis of a Dykstra-Type Projection Algorithm
- Stochastic differential equations for modeling first order optimization methods
- Coordinate descent methods beyond smoothness and separability
- Efficient iterative method for SOAV minimization problem with linear equality and box constraints and its linear convergence
- Convergence results of a new monotone inertial forward-backward splitting algorithm under the local Hölder error bound condition
- Stochastic quasi-subgradient method for stochastic quasi-convex feasibility problems
- Primal necessary characterizations of transversality properties
- On the Complexity Analysis of the Primal Solutions for the Accelerated Randomized Dual Coordinate Ascent
- Interior quasi-subgradient method with non-Euclidean distances for constrained quasi-convex optimization problems in Hilbert spaces
- Local convergence of the heavy-ball method and iPiano for non-convex optimization
- Implicit error bounds for Picard iterations on Hilbert spaces
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
- Tensor Canonical Correlation Analysis With Convergence and Statistical Guarantees
- A family of functional inequalities: Łojasiewicz inequalities and displacement convex functions
- Approaching nonsmooth nonconvex optimization problems through first order dynamical systems with hidden acceleration and Hessian driven damping terms
- Kurdyka-Łojasiewicz exponent via inf-projection
- Constant step stochastic approximations involving differential inclusions: stability, long-run convergence and applications
- Multiple-sets split quasi-convex feasibility problems: adaptive subgradient methods with convergence guarantee
- Neural network for nonsmooth pseudoconvex optimization with general convex constraints
- Error bounds for parametric polynomial systems with applications to higher-order stability analysis and convergence rates
- Perturbation of error bounds
- Quadratic growth conditions and uniqueness of optimal solution to Lasso
- The equivalence of three types of error bounds for weakly and approximately convex functions
- A simple globally convergent algorithm for the nonsmooth nonconvex single source localization problem
- Newton acceleration on manifolds identified by proximal gradient methods
- Error bounds, facial residual functions and applications to the exponential cone
- Fast convergence of inertial dynamics with Hessian-driven damping under geometry assumptions
- Nonlocal error bounds for piecewise affine functions
- A speed restart scheme for a dynamics with Hessian-driven damping
- Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
- FISTA is an automatic geometrically optimized algorithm for strongly convex functions
- Thresholding gradient methods in Hilbert spaces: support identification and linear convergence
- Radial duality. II: Applications and algorithms
- Differentiating Nonsmooth Solutions to Parametric Monotone Inclusion Problems
- Screening Rules and its Complexity for Active Set Identification
- On the relationship between the Kurdyka-Łojasiewicz property and error bounds on Hadamard manifolds
- Linear Convergence of a Proximal Alternating Minimization Method with Extrapolation for \(\boldsymbol{\ell_1}\)-Norm Principal Component Analysis
- Convergence rates of the heavy-ball method under the Łojasiewicz property
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Revisiting the approximate Carathéodory problem via the Frank-Wolfe algorithm
- Variance reduction for root-finding problems
- Sample Complexity of Sample Average Approximation for Conditional Stochastic Optimization
- Iterative regularization via dual diagonal descent
- Optimal convergence rates for damped inertial gradient dynamics with flat geometries
- Dual block-coordinate forward-backward algorithm with application to deconvolution and deinterlacing of video sequences
- A simple nearly optimal restart scheme for speeding up first-order methods
- General Hölder smooth convergence rates follow from specialized rates assuming growth bounds
- Extragradient method in optimization: convergence and complexity
- Active Set Complexity of the Away-Step Frank--Wolfe Algorithm
- Nonsmooth optimization using Taylor-like models: error bounds, convergence, and termination criteria
- Random Function Iterations for Consistent Stochastic Feasibility
- On the Quadratic Convergence of the Cubic Regularization Method under a Local Error Bound Condition
- Local Minimizers of Semi-Algebraic Functions from the Viewpoint of Tangencies
- Convergence Rates of Damped Inertial Dynamics under Geometric Conditions and Perturbations
- The gradient projection algorithm for a proximally smooth set and a function with Lipschitz continuous gradient
- Convergence of inertial dynamics and proximal algorithms governed by maximally monotone operators
- Convergence rates for an inertial algorithm of gradient type associated to a smooth non-convex minimization
- On the proximal gradient algorithm with alternated inertia
- The restricted strong convexity revisited: analysis of equivalence to error bound and quadratic growth
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Convergence rates of an inertial gradient descent algorithm under growth and flatness conditions
- Linear convergence of inexact descent method and inexact proximal gradient algorithms for lower-order regularization problems
- On the interplay between acceleration and identification for the proximal gradient algorithm
- Constraint qualifications for Karush-Kuhn-Tucker conditions in multiobjective optimization
- Kurdyka-Łojasiewicz property of zero-norm composite functions
- Calculus of the exponent of Kurdyka-Łojasiewicz inequality and its applications to linear convergence of first-order methods
- Nearly optimal first-order methods for convex optimization under gradient norm measure: an adaptive regularization approach
- On the linear convergence of forward-backward splitting method. I: Convergence analysis
- Optimal Convergence Rates for Nesterov Acceleration
- Sharpness, Restart, and Acceleration
- Transversality properties: primal sufficient conditions
- Level-set subdifferential error bounds and linear convergence of Bregman proximal gradient method
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Restarting the accelerated coordinate descent method with a rough strong convexity estimate
- A sufficient condition for asymptotically well behaved property of convex polynomials
- The modified second APG method for DC optimization problems
- Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
- Novel Reformulations and Efficient Algorithms for the Generalized Trust Region Subproblem
- Alternating projections with applications to Gerchberg-Saxton error reduction
- Variational analysis perspective on linear convergence of some first order methods for nonsmooth convex optimization problems
- Moduli of regularity and rates of convergence for Fejér monotone sequences
- Convergence rates of forward-Douglas-Rachford splitting method
- Accelerate stochastic subgradient method by leveraging local growth condition
- Computational approaches to non-convex, sparsity-inducing multi-penalty regularization
- Proximal-Like Incremental Aggregated Gradient Method with Linear Convergence Under Bregman Distance Growth Conditions
- On linear convergence of non-Euclidean gradient methods without strong convexity and Lipschitz gradient continuity
- Distributed block-diagonal approximation methods for regularized empirical risk minimization
- Convergence rates of subgradient methods for quasi-convex optimization problems
- Inertial proximal incremental aggregated gradient method with linear convergence guarantees
- Quadratic optimization with orthogonality constraint: explicit Łojasiewicz exponent and linear convergence of retraction-based line-search and stochastic variance-reduced gradient methods
- Curiosities and counterexamples in smooth convex optimization
- Convergence rates for the heavy-ball continuous dynamics for non-convex optimization, under Polyak-Łojasiewicz condition
- A Bregman Forward-Backward Linesearch Algorithm for Nonconvex Composite Optimization: Superlinear Convergence to Nonisolated Local Minima
- Convergence Rate Analysis of a Sequential Convex Programming Method with Line Search for a Class of Constrained Difference-of-Convex Optimization Problems


Uses Software



Cites Work




This page was built for publication: From error bounds to the complexity of first-order descent methods for convex functions