Stochastic nested variance reduction for nonconvex optimization
Publication: 4969167 (MaRDI item Q4969167)
Pan Xu, Dongruo Zhou, Quanquan Gu
Publication date: 5 October 2020
Full work available at URL: https://arxiv.org/abs/1806.07811
Recommendations
- Stochastic variance-reduced cubic regularization methods
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Lower bounds for non-convex stochastic optimization
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
MSC classification
- Numerical optimization and variational techniques (65K10)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Nonconvex programming, global optimization (90C26)
Cites Work
- A Stochastic Approximation Method
- Most Tensor Problems Are NP-Hard
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- A Proximal Stochastic Gradient Method with Progressive Variance Reduction
- Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization
- A simplified neuron model as a principal component analyzer
- Cubic regularization of Newton method and its global performance
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Accelerated Methods for NonConvex Optimization
- Finding approximate local minima faster than gradient descent
- Katyusha: the first direct acceleration of stochastic gradient methods
- First-order methods almost always avoid strict saddle points
- Title not available
Cited In (17)
- Accelerated doubly stochastic gradient descent for tensor CP decomposition
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Title not available
- Fast Decentralized Nonconvex Finite-Sum Optimization with Recursive Variance Reduction
- Variance reduction on general adaptive stochastic mirror descent
- Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
- Accelerated stochastic variance reduction for a class of convex optimization problems
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- Lower bounds for non-convex stochastic optimization
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
- Title not available
- Stochastic perturbation of reduced gradient & GRG methods for nonconvex programming problems
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Nonconvex stochastic optimization for model reduction
- A linearly convergent stochastic recursive gradient method for convex optimization
- Stochastic Nested Variance Reduction for Nonconvex Optimization