Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
Publication: 507334
DOI: 10.1007/s10107-016-1017-3
zbMATH: 1356.90099
arXiv: 1411.3803
OpenAlex: W1494085563
MaRDI QID: Q507334
Mengdi Wang, Ethan X. Fang, Han Liu
Publication date: 3 February 2017
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1411.3803
Keywords: stochastic optimization; convex optimization; simulation; statistical learning; stochastic gradient; sample complexity
MSC classification: Convex programming (90C25); Large-scale problems in mathematical programming (90C06); Stochastic programming (90C15); Online algorithms; streaming algorithms (68W27)
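The paper studies minimizing compositions of expected-value functions, i.e. objectives of the form F(x) = E[f(E[g(x; η)]; ξ)], by a two-timescale stochastic scheme that maintains a fast-averaged estimate of the inner expectation alongside the slower iterate update. A minimal sketch on a toy one-dimensional problem is below; the toy objective (g(x; η) = x + η, f(y) = y², so F(x) = x² + const with minimizer 0), the stepsize exponents, and the step counts are illustrative assumptions, not the authors' exact pseudocode.

```python
import random

def scgd(x0, steps=20000, seed=0):
    """Basic stochastic compositional gradient descent sketch.

    Minimizes F(x) = f(E[g(x; eta)]) for the toy case
    g(x; eta) = x + eta with eta ~ N(0, 1) and f(y) = y**2,
    whose minimizer is x = 0.
    """
    rng = random.Random(seed)
    x = x0
    y = x0  # running estimate of the inner expectation E[g(x; eta)]
    for k in range(1, steps + 1):
        alpha = k ** -0.75  # slow timescale: iterate stepsize (assumed schedule)
        beta = k ** -0.5    # fast timescale: inner averaging weight
        g_sample = x + rng.gauss(0.0, 1.0)      # noisy sample of g(x; eta)
        y = (1 - beta) * y + beta * g_sample    # update inner-value estimate
        grad = 2.0 * y  # quasi-gradient: grad g(x)^T * f'(y), with grad g = 1
        x -= alpha * grad
    return x

x_star = scgd(5.0)
```

Here `x_star` lands near the true minimizer 0; the key point is that the gradient is evaluated at the averaged estimate `y` rather than at a single noisy inner sample, which would bias the plain plug-in estimator since f is nonlinear.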
Related Items (34)
Cites Work
- Random algorithms for convex minimization problems
- Incremental proximal methods for large scale convex optimization
- High-dimensional analysis of semidefinite relaxations for sparse principal components
- Variable selection in nonparametric additive models
- Stochastic approximation. A dynamical systems viewpoint.
- Stochastic approximation with two time scales
- Convergence rate of linear two-time-scale stochastic approximation.
- Incremental constraint projection methods for variational inequalities
- Statistical estimation of composite risk functionals and risk optimization problems
- Incremental Subgradient Methods for Nondifferentiable Optimization
- On Upper Functions for Stochastic Approximation Procedures
- Model selection and estimation in the Gaussian graphical model
- Lectures on Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Sparse Additive Models
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization I: A Generic Algorithmic Framework
- A two Timescale Stochastic Approximation Scheme for Simulation-Based Parametric Optimization
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Optimal Stochastic Approximation Algorithms for Strongly Convex Stochastic Composite Optimization, II: Shrinking Procedures and Optimal Algorithms
- Stochastic Estimation of the Maximum of a Regression Function