On variance reduction for stochastic smooth convex optimization with multiplicative noise
DOI: 10.1007/s10107-018-1297-x · zbMath: 1411.65082 · arXiv: 1705.02969 · OpenAlex: W2962930655 · MaRDI QID: Q1739038
Publication date: 24 April 2019
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1705.02969
Keywords: complexity; stochastic approximation; acceleration; variance reduction; multiplicative noise; smooth convex optimization; composite optimization; dynamic sampling
MSC classifications: Numerical mathematical programming methods (65K05); Convex programming (90C25); Stochastic programming (90C15); Stochastic approximation (62L20); Complexity and performance of numerical algorithms (65Y20)
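The keywords above mention variance reduction via dynamic sampling for stochastic gradient methods under multiplicative noise. As a rough illustration of what a dynamic-sampling scheme looks like (a geometrically growing mini-batch that shrinks the gradient's variance across iterations), here is a minimal sketch on a least-squares problem. All function and parameter names (`dynamic_sampling_sgd`, `growth`, etc.) are illustrative assumptions and are not taken from the paper itself.

```python
# Hypothetical sketch: stochastic gradient descent with dynamic sampling
# (growing mini-batches), one variance-reduction idea named in the keywords.
import numpy as np

def minibatch_gradient(x, A, b, rng, batch_size):
    """Average gradient of 0.5*(a^T x - b)^2 over a random mini-batch."""
    idx = rng.choice(len(b), size=batch_size, replace=True)
    A_k, b_k = A[idx], b[idx]
    return A_k.T @ (A_k @ x - b_k) / batch_size

def dynamic_sampling_sgd(A, b, x0, step=0.1, iters=50, batch0=4, growth=1.2, seed=0):
    """SGD where the mini-batch grows geometrically, reducing gradient variance."""
    rng = np.random.default_rng(seed)
    x, batch = x0.copy(), float(batch0)
    for _ in range(iters):
        g = minibatch_gradient(x, A, b, rng, int(batch))
        x -= step * g
        batch = min(batch * growth, len(b))  # dynamic sampling schedule
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 5))
    x_true = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(1000)
    x_hat = dynamic_sampling_sgd(A, b, np.zeros(5))
    print("estimation error:", np.linalg.norm(x_hat - x_true))
```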
Related Items (13)
Cites Work
- Primal-dual subgradient methods for convex problems
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Gradient methods for minimizing composite functions
- An optimal method for stochastic composite optimization
- A sparsity preserving stochastic gradient methods for sparse regression
- Handbook of simulation optimization
- Minimizing finite sums with the stochastic average gradient
- Sample size selection in optimization methods for machine learning
- Learning by mirror averaging
- New method of stochastic approximation type
- Introductory lectures on convex optimization. A basic course.
- Convergence of stochastic proximal gradient algorithm
- Recursive aggregation of estimators by the mirror descent algorithm with averaging
- Confidence level solutions for stochastic programming
- A short note on parameter approximation for von Mises-Fisher distributions: and a fast implementation of \(I_{s}(x)\)
- Optimal non-asymptotic analysis of the Ruppert-Polyak averaging stochastic algorithm
- Hybrid Deterministic-Stochastic Methods for Data Fitting
- Adaptivity of averaged stochastic gradient descent to local strong convexity for logistic regression
- An optimal algorithm for stochastic strongly-convex optimization
- Lectures on Stochastic Programming
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Catalyst Acceleration for First-order Convex Optimization: from Theory to Practice
- Variance-Based Extragradient Methods with Line Search for Stochastic Variational Inequalities
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Optimization Methods for Large-Scale Machine Learning
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- Stochastic Approximation Approaches to the Stochastic Variational Inequality Problem
- Katyusha: the first direct acceleration of stochastic gradient methods
- Solving variational inequalities with Stochastic Mirror-Prox algorithm
- Incremental Constraint Projection Methods for Monotone Stochastic Variational Inequalities
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- On the Convergence Rate of Incremental Aggregated Gradient Algorithms
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- On perturbed proximal gradient algorithms
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Extragradient Method with Variance Reduction for Stochastic Variational Inequalities
- A Stochastic Approximation Method
- On a Stochastic Approximation Method
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm