Performance of first-order methods for smooth convex minimization: a novel approach

From MaRDI portal

Publication:2248759

DOI: 10.1007/S10107-013-0653-0
zbMath: 1300.90068
arXiv: 1206.3209
OpenAlex: W1979896658
MaRDI QID: Q2248759

Yoel Drori, Marc Teboulle

Publication date: 27 June 2014

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1206.3209






Related Items (77)

Accelerated proximal algorithms with a correction term for monotone inclusions
Optimized first-order methods for smooth convex minimization
Inertial Proximal Alternating Linearized Minimization (iPALM) for Nonconvex and Nonsmooth Problems
Optimal complexity and certification of Bregman first-order methods
A frequency-domain analysis of inexact gradient methods
Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
Gradient descent technology for sparse vector learning in ontology algorithms
An optimal variant of Kelley's cutting-plane method
Generalizing the Optimized Gradient Method for Smooth Convex Minimization
Optimal deterministic algorithm generation
Adaptive restart of the optimized gradient method for convex optimization
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Fast proximal algorithms for nonsmooth convex optimization
Accelerated additive Schwarz methods for convex optimization with adaptive restart
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
Fast gradient methods for uniformly convex and weakly smooth problems
The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Backward-forward-reflected-backward splitting for three operator monotone inclusions
iPiasco: inertial proximal algorithm for strongly convex optimization
Optimal error bounds for non-expansive fixed-point iterations in normed spaces
An optimal gradient method for smooth strongly convex minimization
Proximal Splitting Algorithms for Convex Optimization: A Tour of Recent Advances, with New Twists
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Unnamed Item
Conditions for linear convergence of the gradient method for non-convex optimization
Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
Optimal step length for the maximal decrease of a self-concordant function by the Newton method
Unnamed Item
Convergence rate of a relaxed inertial proximal algorithm for convex minimization
An elementary approach to tight worst case complexity analysis of gradient based methods
Principled analyses and design of first-order methods with inexact proximal operators
Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10--16, 2022
Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
Efficient first-order methods for convex minimization: a constructive approach
Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient
Accelerated methods for saddle-point problem
Subsampled Hessian Newton Methods for Supervised Learning
Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
Accelerated proximal point method for maximally monotone operators
The exact worst-case convergence rate of the alternating direction method of multipliers
Smooth strongly convex interpolation and exact worst-case performance of first-order methods
The exact information-based complexity of smooth convex minimization
On the convergence analysis of the optimized gradient method
Cyclic schemes for PDE-based image analysis
Fast convergence of generalized forward-backward algorithms for structured monotone inclusions
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Accelerated forward–backward algorithms for structured monotone inclusions
Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
Bounds for the tracking error of first-order online optimization methods
Automated tight Lyapunov analysis for first-order methods
Generalized proximal point algorithms with correction terms and extrapolation
Regularized nonlinear acceleration
Provably faster gradient descent via long steps
Tight ergodic sublinear convergence rate of the relaxed proximal point algorithm for monotone variational inequalities
On the rate of convergence of the difference-of-convex algorithm (DCA)
Interpolation conditions for linear operators and applications to performance estimation problems
Two new splitting methods for three-operator monotone inclusions in Hilbert spaces
Data-Driven Nonsmooth Optimization
A stochastic subspace approach to gradient-free optimization in high dimensions
Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
PEPIT: computer-assisted worst-case analyses of first-order optimization methods in python
Analysis of optimization algorithms via sum-of-squares
Several kinds of acceleration techniques for unconstrained optimization first-order algorithms
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Finding the forward-Douglas-Rachford-forward method
Solving inverse problems using data-driven models
Robust and structure exploiting optimisation algorithms: an integral quadratic constraint approach
Multiply Accelerated Value Iteration for NonSymmetric Affine Fixed Point Problems and Application to Markov Decision Processes
An Optimal High-Order Tensor Method for Convex Optimization
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization


Uses Software



Cites Work




This page was built for publication: Performance of first-order methods for smooth convex minimization: a novel approach