An optimal algorithm for stochastic strongly-convex optimization

zbMath: 1319.90050 · arXiv: 1006.2425 · MaRDI QID: Q2934088

Satyen Kale, Elad Hazan

Publication date: 8 December 2014

Full work available at URL: https://arxiv.org/abs/1006.2425
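
The optimal O(1/T) rate in the paper is obtained with an epoch-based variant of stochastic gradient descent (Epoch-GD): SGD is run in epochs, the epoch length is doubled and the step size halved after each epoch, and each new epoch restarts from the average iterate of the previous one. The following is a minimal illustrative sketch of that scheme; the function names, step-size constants, and toy gradient oracle are assumptions made for illustration (the paper's version also projects onto a bounded convex domain, omitted here).

```python
import numpy as np

def epoch_gd(grad_oracle, x0, lam, total_steps):
    """Sketch of epoch-based SGD for a lam-strongly-convex objective.

    grad_oracle(x) returns an unbiased stochastic gradient at x;
    total_steps is the overall budget of gradient evaluations.
    """
    x = np.asarray(x0, dtype=float)
    eta = 1.0 / lam      # epoch-1 step size (illustrative constant)
    epoch_len = 1        # epoch-1 length; doubled after each epoch
    used = 0
    while used + epoch_len <= total_steps:
        avg = np.zeros_like(x)
        for _ in range(epoch_len):
            x = x - eta * grad_oracle(x)   # one stochastic gradient step
            avg += x
        x = avg / epoch_len   # next epoch starts from the epoch average
        used += epoch_len
        epoch_len *= 2        # double the epoch length ...
        eta /= 2.0            # ... and halve the step size
    return x

# Hypothetical usage: noisy gradients of f(x) = (lam/2) * ||x||^2.
rng = np.random.default_rng(0)
lam = 0.5
oracle = lambda x: lam * x + 0.1 * rng.standard_normal(x.shape)
print(epoch_gd(oracle, x0=np.ones(5), lam=lam, total_steps=10_000))
```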

Related Items

Gradient-free two-point methods for solving stochastic nonsmooth convex optimization problems with small non-random noises
Stochastic forward-backward splitting for monotone inclusions
Nonparametric stochastic approximation with large step-sizes
Optimal distributed stochastic mirror descent for strongly convex optimization
Improving kernel online learning with a snapshot memory
Logarithmic regret in online linear quadratic control using Riccati updates
Perturbed Iterate Analysis for Asynchronous Stochastic Optimization
Online Covariance Matrix Estimation in Stochastic Gradient Descent
Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
Relaxed-inertial proximal point type algorithms for quasiconvex minimization
Bregman proximal point type algorithms for quasiconvex minimization
On the Adaptivity of Stochastic Gradient-Based Optimization
Technical Note—Nonstationary Stochastic Optimization Under Lp,q-Variation Measures
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Minimizing finite sums with the stochastic average gradient
New nonasymptotic convergence rates of stochastic proximal point algorithm for stochastic convex optimization
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Convergence of stochastic proximal gradient algorithm
A modular analysis of adaptive (non-)convex optimization: optimism, composite objectives, variance reduction, and variational bounds
Analogues of Switching Subgradient Schemes for Relatively Lipschitz-Continuous Convex Programming Problems
Convergence Rates for Deterministic and Stochastic Subgradient Methods without Lipschitz Continuity
Exploiting problem structure in optimization under uncertainty via online convex optimization
Making the Last Iterate of SGD Information Theoretically Optimal
On strongly quasiconvex functions: existence results and proximal point algorithms