General convergence analysis of stochastic first-order methods for composite optimization
From MaRDI portal
Publication: 2032020
DOI: 10.1007/s10957-021-01821-2 · zbMath: 1471.65055 · arXiv: 2003.01666 · OpenAlex: W3130318025 · MaRDI QID: Q2032020
Publication date: 15 June 2021
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/2003.01666
convergence rates; quadratic functional growth; stochastic bounded gradient; stochastic composite convex optimization; stochastic first-order algorithms
Numerical mathematical programming methods (65K05); Convex programming (90C25); Stochastic programming (90C15)
Related Items (1)
Cites Work
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- First-order methods of smooth convex optimization with inexact oracle
- An optimal method for stochastic composite optimization
- The solution path of the generalized lasso
- Introductory lectures on convex optimization. A basic course.
- Convergence of stochastic proximal gradient algorithm
- Random minibatch subgradient algorithms for convex problems with functional constraints
- Linear convergence of first order methods for non-strongly convex optimization
- Weak Sharp Minima in Mathematical Programming
- Robust Stochastic Approximation Approach to Stochastic Programming
- RSG: Beating Subgradient Method without Smoothness and Strong Convexity
- Nonasymptotic convergence of stochastic proximal point algorithms for constrained convex optimization
- Energy-based sensor network source localization via projection onto convex sets
- Optimization Methods for Large-Scale Machine Learning
- Computational Complexity of Inexact Gradient Augmented Lagrangian Methods: Application to Constrained MPC
- Randomized Projection Methods for Convex Feasibility: Conditioning and Convergence Rates
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- On perturbed proximal gradient algorithms
- Minimization of unsmooth functionals
- Convex analysis and monotone operator theory in Hilbert spaces