Smooth Optimization with Approximate Gradient

From MaRDI portal

Publication:3395010

DOI: 10.1137/060676386
zbMath: 1180.90378
arXiv: math/0512344
OpenAlex: W2121210949
MaRDI QID: Q3395010

Alexandre d'Aspremont

Publication date: 20 August 2009

Published in: SIAM Journal on Optimization

Full work available at URL: https://arxiv.org/abs/math/0512344
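
As context for this record: the paper concerns Nesterov's smooth minimization scheme when each gradient call is only approximate. The Python sketch below illustrates that setting under stated assumptions; the function names, the quadratic test problem, and the error model (an additive perturbation of norm eps on the exact gradient) are illustrative choices made here, not taken from the paper itself. How such errors propagate through the accelerated iteration is the question the paper addresses.

import numpy as np

def smooth_opt_approx_grad(approx_grad, x0, L, n_iters=200):
    # Nesterov's smooth scheme (Euclidean prox, unconstrained case),
    # driven by an approximate gradient oracle. Illustrative sketch only.
    x0 = np.asarray(x0, dtype=float)
    x = x0.copy()
    y = x0.copy()
    s = np.zeros_like(x0)                  # weighted sum of oracle outputs
    for k in range(n_iters):
        g = approx_grad(x)                 # inexact gradient at the current point
        y = x - g / L                      # gradient step
        s += (k + 1) / 2.0 * g             # weights alpha_k = (k + 1) / 2
        z = x0 - s / L                     # step from the prox center x0
        x = (2.0 * z + (k + 1) * y) / (k + 3)  # convex combination of z and y
    return y

# Demo (assumed, not from the paper): minimize f(x) = x' Q x / 2 when the
# gradient Q x is known only up to an additive error of norm eps.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 20))
Q = A.T @ A + np.eye(20)                   # positive definite Hessian
L = float(np.linalg.eigvalsh(Q).max())     # Lipschitz constant of the gradient
eps = 1e-3                                 # assumed oracle error level

def approx_grad(x):
    d = rng.standard_normal(x.shape)
    return Q @ x + eps * d / np.linalg.norm(d)  # exact gradient plus eps-norm error

x_best = smooth_opt_approx_grad(approx_grad, np.ones(20), L)
print(np.linalg.norm(Q @ x_best))          # residual gradient norm, limited by eps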

Related Items (38)

Lower Bounds for Parallel and Randomized Convex Optimization
Sequential Subspace Optimization for Quasar-Convex Optimization Problems with Inexact Gradient
Accelerated gradient sliding for structured convex optimization
A frequency-domain analysis of inexact gradient methods
Generalized mirror prox algorithm for monotone variational inequalities: Universality and inexact oracle
Robust hybrid zero-order optimization algorithms with acceleration via averaging in time
Statistical Query Algorithms for Mean Vector Estimation and Stochastic Convex Optimization
Inexact first-order primal-dual algorithms
Semi-discrete optimal transport: hardness, regularization and numerical solution
Accelerated differential inclusion for convex optimization
Stopping rules for gradient methods for non-convex problems with additive noise in gradient
First-order methods of smooth convex optimization with inexact oracle
A dual gradient-projection algorithm for model predictive control in fixed-point arithmetic
Accelerated gradient methods with absolute and relative noise in the gradient
Gradient-Type Methods for Optimization Problems with Polyak-Łojasiewicz Condition: Early Stopping and Adaptivity to Inexactness Parameter
Primal-dual first-order methods with \({\mathcal {O}(1/\varepsilon)}\) iteration-complexity for cone programming
First-order methods for convex optimization
Differentially private inference via noisy optimization
Unifying framework for accelerated randomized methods in convex optimization
Worst-Case Convergence Analysis of Inexact Gradient and Newton Methods Through Semidefinite Programming Performance Estimation
An optimal method for stochastic composite optimization
Automatic alignment for three-dimensional tomographic reconstruction
HT-AWGM: a hierarchical Tucker-adaptive wavelet Galerkin method for high-dimensional elliptic problems
A generalization of Löwner-John's ellipsoid theorem
Composite convex optimization with global and local inexact oracles
An introduction to continuous optimization for imaging
An acceleration procedure for optimal first-order methods
Analysis of biased stochastic gradient descent using sequential semidefinite programs
Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
New analysis and results for the Frank-Wolfe method
Decentralized and parallel primal and dual accelerated methods for stochastic convex programming problems
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone
An Accelerated Linearized Alternating Direction Method of Multipliers
Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
Universal intermediate gradient method for convex problems with inexact oracle
An inexact dual fast gradient-projection method for separable convex optimization with linear coupled constraints
Exact Worst-Case Performance of First-Order Methods for Composite Convex Optimization


This page was built for publication: Smooth Optimization with Approximate Gradient