Stochastic nested variance reduction for nonconvex optimization
From MaRDI portal
Publication:4969167
Recommendations
- Stochastic variance-reduced cubic regularization methods
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization
- Lower bounds for non-convex stochastic optimization
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
Cites work
- A Stochastic Approximation Method
- A proximal stochastic gradient method with progressive variance reduction
- A simplified neuron model as a principal component analyzer
- A trust region algorithm with a worst-case iteration complexity of \(\mathcal{O}(\epsilon^{-3/2})\) for nonconvex optimization
- Accelerated gradient methods for nonconvex nonlinear and stochastic programming
- Accelerated methods for nonconvex optimization
- Affine conjugate adaptive Newton methods for nonlinear elastomechanics
- Cubic regularization of Newton method and its global performance
- Finding approximate local minima faster than gradient descent
- First-order methods almost always avoid strict saddle points
- Katyusha: the first direct acceleration of stochastic gradient methods
- Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization
- Most tensor problems are NP-hard
- On stationary-point hitting time and ergodicity of stochastic gradient Langevin dynamics
- Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming
- Stochastic dual coordinate ascent methods for regularized loss minimization
Cited in (20)
- Accelerated doubly stochastic gradient descent for tensor CP decomposition
- Stochastic Gauss-Newton algorithm with STORM estimators for nonconvex composite optimization
- On variance reduction for stochastic smooth convex optimization with multiplicative noise
- Variable metric proximal stochastic variance reduced gradient methods for nonconvex nonsmooth optimization
- Variance reduction on general adaptive stochastic mirror descent
- Stochastic variable metric proximal gradient with variance reduction for non-convex composite optimization
- Accelerated stochastic variance reduction for a class of convex optimization problems
- Convergence rates for the stochastic gradient descent method for non-convex objective functions
- A Single Timescale Stochastic Approximation Method for Nested Stochastic Optimization
- Lower bounds for non-convex stochastic optimization
- Fast decentralized nonconvex finite-sum optimization with recursive variance reduction
- Stochastic Nested Variance Reduction for Nonconvex Optimization
- Momentum-based variance-reduced proximal stochastic gradient method for composite nonconvex stochastic optimization
- Variance-reduced reshuffling gradient descent for nonconvex optimization: centralized and distributed algorithms
- Stochastic perturbation of reduced gradient & GRG methods for nonconvex programming problems
- scientific article; zbMATH DE number 7625189
- A stochastic extra-step quasi-Newton method for nonsmooth nonconvex optimization
- Nonconvex stochastic optimization for model reduction
- Stochastic variance-reduced cubic regularization methods
- A linearly convergent stochastic recursive gradient method for convex optimization