An optimal algorithm for stochastic strongly-convex optimization

From MaRDI portal

Publication:2934088

zbMath: 1319.90050
arXiv: 1006.2425
MaRDI QID: Q2934088

Authors: Satyen Kale, Elad Hazan

Publication date: 8 December 2014

Full work available at URL: https://arxiv.org/abs/1006.2425

Related Items (28)

Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
Stochastic forward-backward splitting for monotone inclusions
Nonparametric stochastic approximation with large step-sizes
Optimal distributed stochastic mirror descent for strongly convex optimization
Improving kernel online learning with a snapshot memory
Logarithmic regret in online linear quadratic control using Riccati updates
Perturbed Iterate Analysis for Asynchronous Stochastic Optimization
Online Covariance Matrix Estimation in Stochastic Gradient Descent
Unnamed Item
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Relaxed-inertial proximal point type algorithms for quasiconvex minimization
Bregman proximal point type algorithms for quasiconvex minimization
On the Adaptivity of Stochastic Gradient-Based Optimization
Technical Note—Nonstationary Stochastic Optimization Under L_{p,q}-Variation Measures
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Unnamed Item
Unnamed Item
Minimizing finite sums with the stochastic average gradient
New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Convergence of stochastic proximal gradient algorithm
Unnamed Item
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
Exploiting problem structure in optimization under uncertainty via online convex optimization
Making the Last Iterate of SGD Information Theoretically Optimal
On strongly quasiconvex functions: existence results and proximal point algorithms

This page was built for publication: An optimal algorithm for stochastic strongly-convex optimization