A hybrid stochastic optimization framework for composite nonconvex optimization
Publication: 2118109
DOI: 10.1007/s10107-020-01583-1 · zbMath: 1489.90143 · arXiv: 1907.03793 · OpenAlex: W3119990369 · MaRDI QID: Q2118109
Dzung T. Phan, Lam M. Nguyen, Nhan H. Pham, Quoc Tran-Dinh
Publication date: 22 March 2022
Published in: Mathematical Programming. Series A. Series B
Full work available at URL: https://arxiv.org/abs/1907.03793
Keywords: variance reduction, oracle complexity, stochastic optimization algorithm, composite nonconvex optimization, hybrid stochastic estimator
MSC classifications: Nonconvex programming, global optimization (90C26); Computational methods for problems pertaining to operations research and mathematical programming (90-08)
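
The keywords point at a hybrid stochastic gradient estimator for composite nonconvex problems. Below is a minimal, hypothetical Python sketch of that estimator type: a convex combination of a SARAH-style recursive difference and a plain stochastic gradient, followed by a proximal step on the nonsmooth term. It is not the paper's exact algorithm; the names (hybrid_prox_sgd, prox_l1), the step size eta, the mixing weight beta, and the l1-regularized toy problem are illustrative assumptions.

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hybrid_prox_sgd(grad_f, x0, n_samples, n_iters=1000, eta=0.05, beta=0.9, lam=0.01, seed=0):
    """Illustrative hybrid stochastic proximal-gradient loop for
    min_x E[f(x; xi)] + lam * ||x||_1, where grad_f(x, i) returns the
    gradient of the i-th component of f at x (hypothetical interface)."""
    rng = np.random.default_rng(seed)
    x_prev = np.asarray(x0, dtype=float).copy()
    v = grad_f(x_prev, rng.integers(n_samples))      # start from a plain stochastic gradient
    x = prox_l1(x_prev - eta * v, eta * lam)
    for _ in range(n_iters):
        i, j = rng.integers(n_samples, size=2)       # two independent component indices
        # Hybrid estimator: convex combination of a SARAH-style recursive
        # difference and an unbiased stochastic gradient.
        v = beta * (v + grad_f(x, i) - grad_f(x_prev, i)) + (1.0 - beta) * grad_f(x, j)
        x_prev, x = x, prox_l1(x - eta * v, eta * lam)   # proximal gradient step
    return x

# Toy usage (hypothetical data): sparse least squares with component
# gradients g_i(x) = a_i * (a_i^T x - b_i).
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 50))
b = A @ (rng.standard_normal(50) * (rng.random(50) < 0.1))
x_hat = hybrid_prox_sgd(lambda x, i: A[i] * (A[i] @ x - b[i]), np.zeros(50), n_samples=200)
```

In this sketch the mixing weight beta trades variance reduction (beta near 1, recursive term dominates) against unbiasedness (beta near 0, plain stochastic gradient dominates); the actual weight schedule, batch sizes, and oracle-complexity guarantees should be taken from the linked arXiv version of the paper.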
Related Items
- Hybrid SGD algorithms to solve stochastic composite optimization problems with application in sparse portfolio selection problems
- Stochastic momentum methods for non-convex learning without bounded assumptions
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
- Stochastic inexact augmented Lagrangian method for nonconvex expectation constrained optimization
- Proximal stochastic recursive momentum algorithm for nonsmooth nonconvex optimization problems
- Unnamed Item
- Unnamed Item
- Distributed Stochastic Inertial-Accelerated Methods with Delayed Derivatives for Nonconvex Problems
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- A Stochastic Quasi-Newton Method for Large-Scale Optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Stochastic compositional gradient descent: algorithms for minimizing compositions of expected-value functions
- Minimizing finite sums with the stochastic average gradient
- Incremental proximal methods for large scale convex optimization
- A simplified neuron model as a principal component analyzer
- Introductory lectures on convex optimization. A basic course.
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Sub-sampled Newton methods
- Lower bounds for finding stationary points I
- Cubic regularization of Newton method and its global performance
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Robust Stochastic Approximation Approach to Stochastic Programming
- Acceleration of Stochastic Approximation by Averaging
- Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications
- Katyusha: the first direct acceleration of stochastic gradient methods
- Proximally Guided Stochastic Subgradient Method for Nonsmooth, Nonconvex Problems
- Incremental Majorization-Minimization Optimization with Application to Large-Scale Machine Learning
- Information-Theoretic Lower Bounds on the Oracle Complexity of Stochastic Convex Optimization
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- A Stochastic Approximation Method
- Exact and inexact subsampled Newton methods for optimization
- Inexact SARAH algorithm for stochastic optimization
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization