Worst-case convergence analysis of inexact gradient and Newton methods through semidefinite programming performance estimation
DOI: 10.1137/19M1281368
zbMATH Open: 1448.90070
arXiv: 1709.05191
MaRDI QID: Q5116548
FDO: Q5116548
Authors: E. de Klerk, François Glineur, Adrien B. Taylor
Publication date: 18 August 2020
Published in: SIAM Journal on Optimization
Full work available at URL: https://arxiv.org/abs/1709.05191
Recommendations
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- Exact worst-case performance of first-order methods for composite convex optimization
- Principled analyses and design of first-order methods with inexact proximal operators
- Convergence analysis of an inexact infeasible interior point method for semidefinite programming
Keywords: interior point methods; semidefinite programming; gradient method; inexact search directions; performance estimation problems
MSC classification: Nonconvex programming, global optimization (90C26); Semidefinite programming (90C22); Abstract computational complexity for mathematical programming problems (90C60)
Cites Work
- Title not available
- Analysis and Design of Optimization Algorithms via Integral Quadratic Constraints
- Smooth Optimization with Approximate Gradient
- Quantitative estimates of the convergence of the empirical covariance matrix in log-concave ensembles
- First-order methods of smooth convex optimization with inexact oracle
- A mathematical view of interior-point methods in convex optimization
- Efficient Monte Carlo Procedures for Generating Points Uniformly Distributed over Bounded Regions
- A relationship between the second derivatives of a convex function and of its conjugate
- Performance of first-order methods for smooth convex minimization: a novel approach
- Optimized first-order methods for smooth convex minimization
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- An extension theorem for convex functions of class \(C^{1,1}\) on Hilbert spaces
- An optimal variant of Kelley's cutting-plane method
- The geometry of logconcave functions and sampling algorithms
- Efficient first-order methods for convex minimization: a constructive approach
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
- Simulated Annealing for Convex Optimization
- Inexact proximal Newton methods for self-concordant functions
- Lectures on convex optimization
- Convergence of methods of feasible directions in extremal problems
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
Cited In (17)
- Complexity Analysis of a Sampling-Based Interior Point Method for Convex Optimization
- Potential Function-Based Framework for Minimizing Gradients in Convex and Min-Max Optimization
- Conditions for linear convergence of the gradient method for non-convex optimization
- Provably faster gradient descent via long steps
- A Systematic Approach to Lyapunov Analyses of Continuous-Time Models in Convex Optimization
- On the rate of convergence of the difference-of-convex algorithm (DCA)
- Interpolation conditions for linear operators and applications to performance estimation problems
- A frequency-domain analysis of inexact gradient methods
- Factor-\(\sqrt{2}\) acceleration of accelerated gradient methods
- Analysis of optimization algorithms via sum-of-squares
- A note on the optimal convergence rate of descent methods with fixed step sizes for smooth strongly convex functions
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- Optimal step length for the Newton method: case of self-concordant functions
- On the Properties of Convex Functions over Open Sets
- Branch-and-bound performance estimation programming: a unified methodology for constructing optimal optimization methods
- Optimal step length for the maximal decrease of a self-concordant function by the Newton method
- Principled analyses and design of first-order methods with inexact proximal operators