Performance of first-order methods for smooth convex minimization: a novel approach
DOI: 10.1007/s10107-013-0653-0 · zbMATH Open: 1300.90068 · arXiv: 1206.3209 · OpenAlex: W1979896658 · MaRDI QID: Q2248759
Authors: Yoel Drori, Marc Teboulle
Publication date: 27 June 2014
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1206.3209
Recommendations
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Optimizing first-order methods for smooth convex minimization of gradient Q-linearly convergence
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Optimized first-order methods for smooth convex minimization
- Generalizing the optimized gradient method for smooth convex minimization
Keywords: rate of convergence; complexity; duality; semidefinite relaxations; smooth convex minimization; heavy ball method; fast gradient schemes; performance of first-order algorithms
MSC classification: Quadratic programming (90C20); Convex programming (90C25); Analysis of algorithms and problem complexity (68Q25); Semidefinite programming (90C22); Abstract computational complexity for mathematical programming problems (90C60); Discrete approximations in optimal control (49M25)
Cites Work
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Title not available
- Variational Analysis
- Introductory lectures on convex optimization. A basic course.
- Title not available
- Lectures on modern convex optimization. Analysis, algorithms, and engineering applications
- Title not available
- Semidefinite Programming
- Graph implementations for nonsmooth convex programs
- Proximité et dualité dans un espace hilbertien
- An Interior-Point Method for Semidefinite Programming
- Fine tuning Nesterov's steepest descent algorithm for differentiable convex programming
- Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
- Gradient-based algorithms with applications to signal-recovery problems
- Convex Optimization in Signal Processing and Communications
- Improved algorithms for convex minimization in relative scale
- The heavy ball with friction method. I: The continuous dynamical system: global exploration of the local minima of a real-valued function by asymptotic analysis of a dissipative dynamical system
- Some methods of speeding up the convergence of iteration methods
- Quadratic Matrix Programming
- Optimizing properties of an inertial dynamical system with geometric damping. Link with proximal methods
Cited In (82)
- Robust and structure exploiting optimisation algorithms: an integral quadratic constraint approach
- An optimal high-order tensor method for convex optimization
- Accelerated proximal algorithms with a correction term for monotone inclusions
- Analysis and design of optimization algorithms via integral quadratic constraints
- Conditions for linear convergence of the gradient method for non-convex optimization
- Tight sublinear convergence rate of the proximal point algorithm for maximal monotone inclusion problems
- Optimal deterministic algorithm generation
- Fast convergence of generalized forward-backward algorithms for structured monotone inclusions
- Data-driven nonsmooth optimization
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- Inertial proximal alternating linearized minimization (iPALM) for nonconvex and nonsmooth problems
- Another look at the fast iterative shrinkage/thresholding algorithm (FISTA)
- Generalizing the optimized gradient method for smooth convex minimization
- Operator splitting performance estimation: tight contraction factors and optimal parameter selection
- Title not available
- Optimal error bounds for non-expansive fixed-point iterations in normed spaces
- Exact worst-case performance of first-order methods for composite convex optimization
- Proximal Splitting Algorithms for Convex Optimization: A Tour of Recent Advances, with New Twists
- Accelerated additive Schwarz methods for convex optimization with adaptive restart
- On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization
- Backward-forward-reflected-backward splitting for three operator monotone inclusions
- Convergence rate of a relaxed inertial proximal algorithm for convex minimization
- Bounds for the tracking error of first-order online optimization methods
- Finding the forward-Douglas-Rachford-forward method
- Optimized first-order methods for smooth convex minimization
- Efficient first-order methods for convex minimization: a constructive approach
- A frequency-domain analysis of inexact gradient methods
- Optimal complexity and certification of Bregman first-order methods
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Synthesis of accelerated gradient algorithms for optimization and saddle point problems using Lyapunov functions and LMIs
- A stochastic subspace approach to gradient-free optimization in high dimensions
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Performance of first- and second-order methods for \(\ell_1\)-regularized least squares problems
- Analysis of optimization algorithms via integral quadratic constraints: nonstrongly convex problems
- Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
- Adaptive restart of the optimized gradient method for convex optimization
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Analysis of optimization algorithms via sum-of-squares
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Optimizing the efficiency of first-order methods for the distance to an optimal solution of smooth convex functions
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- An optimal variant of Kelley's cutting-plane method
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- iPiasco: inertial proximal algorithm for strongly convex optimization
- An optimal gradient method for smooth strongly convex minimization
- Accelerated methods for saddle-point problem
- Gradient descent technology for sparse vector learning in ontology algorithms
- The exact information-based complexity of smooth convex minimization
- On the convergence analysis of the optimized gradient method
- Subsampled Hessian Newton Methods for Supervised Learning
- Fast proximal algorithms for nonsmooth convex optimization
- Fast gradient methods for uniformly convex and weakly smooth problems
- Solving inverse problems using data-driven models
- Regularized nonlinear acceleration
- Rate of convergence of the Nesterov accelerated gradient method in the subcritical case α ≤ 3
- Title not available
- Rate of convergence of inertial gradient dynamics with time-dependent viscous damping coefficient
- New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure
- Cyclic schemes for PDE-based image analysis
- Analysis of biased stochastic gradient descent using sequential semidefinite programs
- Accelerated proximal point method for maximally monotone operators
- Multiply accelerated value iteration for nonsymmetric affine fixed point problems and application to Markov decision processes
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- An elementary approach to tight worst case complexity analysis of gradient based methods
- Provably faster gradient descent via long steps
- On the rate of convergence of the difference-of-convex algorithm (DCA)
- Tight ergodic sublinear convergence rate of the relaxed proximal point algorithm for monotone variational inequalities
- The exact worst-case convergence rate of the alternating direction method of multipliers
- Conic linear optimization for computer-assisted proofs. Abstracts from the workshop held April 10–16, 2022
- Interpolation conditions for linear operators and applications to performance estimation problems
- PEPit: computer-assisted worst-case analyses of first-order optimization methods in Python
- Two new splitting methods for three-operator monotone inclusions in Hilbert spaces
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Optimizing first-order methods for smooth convex minimization of gradient Q-linearly convergence
- Convergence rate analysis of the gradient descent–ascent method for convex–concave saddle-point problems
- Accelerated forward–backward algorithms for structured monotone inclusions
- Automated tight Lyapunov analysis for first-order methods
- Several kinds of acceleration techniques for unconstrained optimization first-order algorithms
- Generalized proximal point algorithms with correction terms and extrapolation
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- Optimal step length for the maximal decrease of a self-concordant function by the Newton method
- Principled analyses and design of first-order methods with inexact proximal operators