Smooth strongly convex interpolation and exact worst-case performance of first-order methods

Publication:507324

DOI: 10.1007/s10107-016-1009-3
zbMath: 1359.90098
arXiv: 1502.05666
OpenAlex: W1962121538
MaRDI QID: Q507324

François Glineur, Adrien B. Taylor, Julien M. Hendrickx

Publication date: 3 February 2017

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1502.05666




Related Items (49)

On the convergence rate of the Halpern-iteration
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
Optimized first-order methods for smooth convex minimization
Optimal complexity and certification of Bregman first-order methods
Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
A frequency-domain analysis of inexact gradient methods
Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
On the Properties of Convex Functions over Open Sets
A note on approximate accelerated forward-backward methods with absolute and relative errors, and possibly strongly convex objectives
Finitely determined functions
Generalizing the Optimized Gradient Method for Smooth Convex Minimization
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Surrogate-based distributed optimisation for expensive black-box functions
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Backward-forward-reflected-backward splitting for three operator monotone inclusions
An optimal gradient method for smooth strongly convex minimization
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Accelerated gradient methods with absolute and relative noise in the gradient
Conditions for linear convergence of the gradient method for non-convex optimization
A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
Unnamed Item
An elementary approach to tight worst case complexity analysis of gradient based methods
Principled analyses and design of first-order methods with inexact proximal operators
Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10--16, 2022
Structured \((\min ,+)\)-convolution and its applications for the shortest/closest vector and nonlinear knapsack problems
Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
Efficient first-order methods for convex minimization: a constructive approach
Halting time is predictable for large models: a universality property and average-case analysis
Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
Accelerated methods for saddle-point problem
A generic online acceleration scheme for optimization algorithms via relaxation and inertia
Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
Accelerated proximal point method for maximally monotone operators
On the convergence analysis of the optimized gradient method
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
Bounds for the tracking error of first-order online optimization methods
Data-Driven Nonsmooth Optimization
Analysis of optimization algorithms via sum-of-squares
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
On the oracle complexity of smooth strongly convex minimization
Finding the forward-Douglas-Rachford-forward method
Solving inverse problems using data-driven models
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization


Uses Software


Cites Work

