Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
DOI: 10.1137/110848876 · zbMath: 1293.62167 · OpenAlex: W2168909589 · MaRDI QID: Q5408211
Publication date: 9 April 2014
Published in: SIAM Journal on Optimization
Full work available at URL: https://doi.org/10.1137/110848876
Mathematics Subject Classification: Analysis of algorithms and problem complexity (68Q25) · Convex programming (90C25) · Stochastic programming (90C15) · Stochastic approximation (62L20)
Related Items (51)
Stochastic forward-backward splitting for monotone inclusions
Stochastic approximation method using diagonal positive-definite matrices for convex optimization with fixed point constraints
Stochastic intermediate gradient method for convex optimization problems
Gradient sliding for composite optimization
New results on subgradient methods for strongly convex optimization problems with a unified analysis
A stochastic subgradient method for distributionally robust non-convex and non-smooth learning
Unnamed Item
On the information-adaptive variants of the ADMM: an iteration complexity perspective
Accelerated Stochastic Algorithms for Convex-Concave Saddle-Point Problems
Block Stochastic Gradient Iteration for Convex and Nonconvex Optimization
Algorithms for stochastic optimization with function or expectation constraints
A dual-based stochastic inexact algorithm for a class of stochastic nonsmooth convex composite problems
Stochastic Block Mirror Descent Methods for Nonsmooth and Stochastic Optimization
Graph Topology Invariant Gradient and Sampling Complexity for Decentralized and Stochastic Optimization
Unnamed Item
Inexact proximal stochastic gradient method for convex composite optimization
Optimal Algorithms for Stochastic Complementary Composite Minimization
Theoretical analysis of Adam using hyperparameters close to one without Lipschitz smoothness
Nonlinear Gradient Mappings and Stochastic Optimization: A General Framework with Applications to Heavy-Tail Noise
Block mirror stochastic gradient method for stochastic optimization
First-order methods for convex optimization
Penalty methods with stochastic approximation for stochastic nonlinear programming
Inexact SA method for constrained stochastic convex SDP and application in Chinese stock market
A sparsity preserving stochastic gradient methods for sparse regression
Two stochastic optimization algorithms for convex optimization with fixed point constraints
Conditional gradient type methods for composite nonlinear and stochastic optimization
On variance reduction for stochastic smooth convex optimization with multiplicative noise
Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
Minimizing finite sums with the stochastic average gradient
RSG: Beating Subgradient Method without Smoothness and Strong Convexity
Convergence of stochastic proximal gradient algorithm
Dynamic stochastic approximation for multi-stage stochastic optimization
An optimal randomized incremental gradient method
Random Gradient Extrapolation for Distributed and Stochastic Optimization
Optimal stochastic extragradient schemes for pseudomonotone stochastic variational inequality problems and their variants
Stochastic intermediate gradient method for convex problems with stochastic inexact oracle
Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
Bundle-level type methods uniformly optimal for smooth and nonsmooth convex optimization
Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning
Communication-efficient algorithms for decentralized and stochastic optimization
Restarting the accelerated coordinate descent method with a rough strong convexity estimate
Conditional Gradient Sliding for Convex Optimization
A family of subgradient-based methods for convex optimization problems in a unifying framework
Generalized uniformly optimal methods for nonlinear programming
On the Solution of Stochastic Optimization and Variational Problems in Imperfect Information Regimes
Accelerate stochastic subgradient method by leveraging local growth condition
Computing the Best Approximation over the Intersection of a Polyhedral Set and the Doubly Nonnegative Cone
Unnamed Item
Robust Accelerated Gradient Methods for Smooth Strongly Convex Functions
Smoothed Variable Sample-Size Accelerated Proximal Methods for Nonsmooth Stochastic Convex Programs
Accelerated gradient methods for nonconvex nonlinear and stochastic programming