PESTO




swMATH ID: 20864
MaRDI QID: Q32678


No author found.

Source code repository: https://github.com/AdrienTaylor/Performance-Estimation-Toolbox
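
PESTO (Performance Estimation Toolbox) is a MATLAB toolbox that computes tight worst-case guarantees for first-order optimization methods by formulating the performance estimation problem as a semidefinite program; it relies on YALMIP together with an SDP solver. As a rough illustration, the sketch below bounds the worst-case objective accuracy of plain gradient descent on a smooth convex function. It follows the workflow documented in the repository above as closely as can be reproduced here; the class name 'SmoothStronglyConvex', the accessor names, the step size, and the iteration count are assumptions to be checked against the repository's demo files rather than a verbatim excerpt.

% Minimal PESTO-style sketch (assumed interface, see repository demos):
% worst-case of N gradient steps on an L-smooth convex function,
% measured by F(x_N) - F(x*). Requires the toolbox, YALMIP, and an SDP solver.

P = pep();                                  % empty performance estimation problem

param.L  = 1;                               % smoothness constant
param.mu = 0;                               % plain convexity (no strong convexity)
F = P.DeclareFunction('SmoothStronglyConvex', param);

x0 = P.StartingPoint();                     % unspecified starting point
[xs, fs] = F.OptimalPoint();                % an optimal point and its value
P.InitialCondition((x0 - xs)^2 <= 1);       % ||x0 - x*||^2 <= 1

gam = 1/param.L;                            % step size 1/L
N   = 10;                                   % number of iterations
x = x0;
for i = 1:N
    x = x - gam * F.gradient(x);            % one gradient step
end

P.PerformanceMetric(F.value(x) - fs);       % worst-case measure F(x_N) - F(x*)
P.solve();                                  % solves the underlying SDP
double(F.value(x) - fs)                     % numerical worst-case bound

For this setup (step size 1/L, unit initial distance) the returned value should be close to the known tight bound L/(4N + 2) on F(x_N) - F(x*); certificates of this kind, for a variety of methods, are what the related items listed below derive or build on.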




Related Items (41)

On the convergence rate of the Halpern-iteration
Parallel random block-coordinate forward-backward algorithm: a unified convergence analysis
Optimized first-order methods for smooth convex minimization
Optimal complexity and certification of Bregman first-order methods
Scaled relative graphs: nonexpansive operators via 2D Euclidean geometry
A frequency-domain analysis of inexact gradient methods
On the Properties of Convex Functions over Open Sets
Finitely determined functions
Generalizing the Optimized Gradient Method for Smooth Convex Minimization
Exact worst-case convergence rates of the proximal gradient method for composite convex minimization
Surrogate-based distributed optimisation for expensive black-box functions
On the worst-case complexity of the gradient method with exact line search for smooth strongly convex functions
The exact worst-case convergence rate of the gradient method with fixed step lengths for \(L\)-smooth functions
Backward-forward-reflected-backward splitting for three operator monotone inclusions
Unnamed Item
Unnamed Item
Another Look at the Fast Iterative Shrinkage/Thresholding Algorithm (FISTA)
Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
Efficient first-order methods for convex minimization: a constructive approach
Halting time is predictable for large models: a universality property and average-case analysis
Operator Splitting Performance Estimation: Tight Contraction Factors and Optimal Parameter Selection
Accelerated methods for saddle-point problem
A generic online acceleration scheme for optimization algorithms via relaxation and inertia
Tight Sublinear Convergence Rate of the Proximal Point Algorithm for Maximal Monotone Inclusion Problems
Smooth strongly convex interpolation and exact worst-case performance of first-order methods
The exact information-based complexity of smooth convex minimization
On the convergence analysis of the optimized gradient method
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Fast gradient descent for convex minimization problems with an oracle producing a \(( \delta, L)\)-model of function at the requested point
Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions
Analysis of Optimization Algorithms via Integral Quadratic Constraints: Nonstrongly Convex Problems
Bounds for the tracking error of first-order online optimization methods
Data-Driven Nonsmooth Optimization
Analysis of the gradient method with an Armijo–Wolfe line search on a class of non-smooth convex functions
Analysis of optimization algorithms via sum-of-squares
New analysis of linear convergence of gradient-type methods via unifying error bound conditions
On the oracle complexity of smooth strongly convex minimization
Finding the forward-Douglas-Rachford-forward method
Solving inverse problems using data-driven models
Optimal step length for the Newton method: case of self-concordant functions
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization


This page was built for software: PESTO