PESTO (Performance Estimation Toolbox)
Software: 32678
swMATH: 20864 · MaRDI QID: Q32678 · FDO: Q32678
Author name not available
Source code repository: https://github.com/AdrienTaylor/Performance-Estimation-Toolbox
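PESTO is a MATLAB toolbox that computes tight worst-case guarantees of first-order optimization methods by recasting their analysis as semidefinite programs. As a minimal sketch of the kind of performance estimation problem it automates, assuming the standard setup of the interpolation papers cited below (\(N\) steps of gradient descent with step size \(\gamma\) on an \(L\)-smooth convex function; a worked formulation for illustration, not the toolbox's API):

\[
\begin{aligned}
\max_{f,\,x_0,\dots,x_N,\,x_\star}\quad & f(x_N) - f(x_\star) \\
\text{s.t.}\quad & f \text{ convex and } L\text{-smooth}, \quad \nabla f(x_\star) = 0, \\
& \|x_0 - x_\star\| \le R, \\
& x_{k+1} = x_k - \gamma \nabla f(x_k), \qquad k = 0,\dots,N-1.
\end{aligned}
\]

Convex interpolation conditions turn this infinite-dimensional problem over functions \(f\) into an equivalent finite-dimensional semidefinite program, whose optimal value is the exact worst-case bound for the method.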
Cited In (41)
- Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization
- Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
- Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
- On the oracle complexity of smooth strongly convex minimization
- Title not available
- Backward-forward-reflected-backward splitting for three operator monotone inclusions
- Bounds for the tracking error of first-order online optimization methods
- Finding the forward-Douglas-Rachford-forward method
- Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
- Optimized first-order methods for smooth convex minimization
- Efficient first-order methods for convex minimization: a constructive approach
- Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
- A frequency-domain analysis of inexact gradient methods
- Optimal complexity and certification of Bregman first-order methods
- Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
- Smooth strongly convex interpolation and exact worst-case performance of first-order methods
- New analysis of linear convergence of gradient-type methods via unifying error bound conditions
- Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
- Analysis of optimization algorithms via sum-of-squares
- On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
- Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
- Title not available
- The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
- Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
- Optimal step length for the Newton method: case of self-concordant functions
- Accelerated methods for saddle-point problem
- On the Properties of Convex Functions over Open Sets
- Fast gradient descent for convex minimization problems with an oracle producing a \((\delta, L)\)-model of function at the requested point
- Generalizing the Optimized Gradient Method for Smooth Convex Minimization
- Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
- Surrogate-based distributed optimisation for expensive black-box functions
- A generic online acceleration scheme for optimization algorithms via relaxation and inertia
- Data-Driven Nonsmooth Optimization
- The exact information-based complexity of smooth convex minimization
- On the convergence analysis of the optimized gradient method
- Solving inverse problems using data-driven models
- Halting time is predictable for large models: a universality property and average-case analysis
- On the convergence rate of the Halpern-iteration
- Finitely determined functions
- Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
- Analysis of biased stochastic gradient descent using sequential semidefinite programs