Performance of first-order methods for smooth convex minimization: a novel approach

Publication: 2248759

DOI: 10.1007/s10107-013-0653-0
zbMATH: 1300.90068
arXiv: 1206.3209
OpenAlex: W1979896658
MaRDI QID: Q2248759

Yoel Drori, Marc Teboulle

Publication date: 27 June 2014

Published in: Mathematical Programming. Series A. Series B

Full work available at URL: https://arxiv.org/abs/1206.3209



Related Items

Accelerated proximal algorithms with a correction term for monotone inclusions
Optimized first-order methods for smooth convex minimization
Inertial Proximal Alternating Linearized Minimization (iPALM) for Nonconvex and Nonsmooth Problems
Optimal complexity and certification of Bregman first-order methods
A frequency-domain analysis of inexact gradient methods
Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
Gradient descent technology for sparse vector learning in ontology algorithms
An optimal variant of Kelley's cutting-plane method
Generalizing the Optimized Gradient Method for Smooth Convex Minimization
Optimal deterministic algorithm generation
Adaptive restart of the optimized gradient method for convex optimization
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Fast proximal algorithms for nonsmooth convex optimization
Accelerated additive Schwarz methods for convex optimization with adaptive restart
Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
Fast gradient methods for uniformly convex and weakly smooth problems
The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Backward-forward-reflected-backward splitting for three operator monotone inclusions
iPiasco: inertial proximal algorithm for strongly convex optimization
Optimal error bounds for non-expansive fixed-point iterations in normed spaces
An optimal gradient method for smooth strongly convex minimization
Proximal Splitting Algorithms for Convex Optimization: A Tour of Recent Advances, with New Twists
Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
Unnamed Item
Conditions for linear convergence of the gradient method for non-convex optimization
Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
Optimal step length for the maximal decrease of a self-concordant function by the Newton method
Unnamed Item
Convergence rate of a relaxed inertial proximal algorithm for convex minimization
An elementary approach to tight worst case complexity analysis of gradient based methods
Principled analyses and design of first-order methods with inexact proximal operators
Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10--16, 2022
Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
Efficient first-order methods for convex minimization: a constructive approach
Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient
Accelerated methods for saddle-point problem
Subsampled Hessian Newton Methods for Supervised Learning
Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
Accelerated proximal point method for maximally monotone operators
Smooth strongly convex interpolation and exact worst-case performance of first-order methods
The exact information-based complexity of smooth convex minimization
On the convergence analysis of the optimized gradient method
Cyclic schemes for PDE-based image analysis
Fast convergence of generalized forward-backward algorithms for structured monotone inclusions
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
Bounds for the tracking error of first-order online optimization methods
Regularized nonlinear acceleration
Data-Driven Nonsmooth Optimization
A stochastic subspace approach to gradient-free optimization in high dimensions
Analysis of optimization algorithms via sum-of-squares
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
Finding the forward-Douglas-Rachford-forward method
Solving inverse problems using data-driven models
Robust and structure exploiting optimisation algorithms: an integral quadratic constraint approach
Multiply Accelerated Value Iteration for NonSymmetric Affine Fixed Point Problems and Application to Markov Decision Processes
An Optimal High-Order Tensor Method for Convex Optimization
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization


Uses Software


Cites Work