A heuristic adaptive fast gradient method in stochastic optimization problems
From MaRDI portal
Abstract: In this paper, we present a heuristic adaptive fast gradient method. We show that, in practice, our method converges faster than today's popular optimization methods. Moreover, we give a justification of the method and point out the issues that prevent us from obtaining theoretical convergence estimates.
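The abstract above refers to an adaptive fast gradient method. The paper's exact algorithm is not reproduced on this page; as a point of reference, the sketch below shows a common pattern such methods build on: Nesterov-type acceleration (method-of-similar-triangles form) with a backtracking procedure that adapts the local Lipschitz estimate `L` on the fly, run here against a stochastic (noisy) gradient oracle for a toy quadratic. All names and constants are illustrative assumptions, not the authors' method.

```python
import numpy as np

def adaptive_fast_gradient(f, grad, x0, n_iters=200, L0=1.0):
    """Accelerated gradient descent with backtracking on the Lipschitz
    estimate L (a generic sketch, not the paper's exact algorithm).
    `grad` may return noisy (stochastic) gradients."""
    x = x0.copy()          # "momentum" sequence
    y = x0.copy()          # sequence at which progress is measured
    A, L = 0.0, L0
    for _ in range(n_iters):
        L = max(L / 2.0, 1e-12)            # optimistically halve L each step
        for _ in range(60):                # bounded backtracking loop
            # choose a so that a^2 * L = A + a (similar-triangles step size)
            a = (1.0 + np.sqrt(1.0 + 4.0 * L * A)) / (2.0 * L)
            A_new = A + a
            z = (A * y + a * x) / A_new    # extrapolation point
            g = grad(z)
            y_new = z - g / L              # gradient step from z
            # sufficient-decrease test for the current estimate of L
            if f(y_new) <= f(z) - np.dot(g, g) / (2.0 * L):
                break
            L *= 2.0                       # estimate too small: double it
        x = x - a * g
        y, A = y_new, A_new
    return y

# Demo on a simple quadratic f(x) = 0.5 * x' D x with a noisy gradient oracle.
rng = np.random.default_rng(0)
D = np.array([1.0, 10.0])
f = lambda x: 0.5 * np.dot(D * x, x)
grad = lambda x: D * x + 1e-4 * rng.standard_normal(2)   # stochastic oracle

y = adaptive_fast_gradient(f, grad, np.array([1.0, 1.0]))
print(f(y))   # small residual near the noise floor
```

The backtracking test is the standard descent-lemma check: with step `1/L` it holds whenever `L` is at least the true smoothness constant, so `L` settles near the local curvature; gradient noise can make the test fail spuriously, which inflates `L` and shortens steps — one of the complications the abstract alludes to when theoretical estimates are discussed.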
Recommendations
- Adaptive sampling for incremental optimization using stochastic gradient descent
- Accelerated and unaccelerated stochastic gradient descent in model generality
- scientific article; zbMATH DE number 5221408
- scientific article; zbMATH DE number 1928800
- An optimal method for stochastic composite optimization
Cites work
- scientific article; zbMATH DE number 4141383
- scientific article; zbMATH DE number 5060482
- Adaptive subgradient methods for online learning and stochastic optimization
- Concentration inequalities. A nonasymptotic theory of independence
- Deep learning
- Fast gradient descent for convex minimization problems with an oracle producing a \((\delta, L)\)-model of function at the requested point
- First-order methods of smooth convex optimization with inexact oracle
- Symmetrization approach to concentration inequalities for empirical processes
- Validation analysis of mirror descent stochastic approximation method
- Variance-based extragradient methods with line search for stochastic variational inequalities
Cited in (8)
- Adaptive sampling for incremental optimization using stochastic gradient descent
- On adaptive stochastic heavy ball momentum for solving linear systems
- scientific article; zbMATH DE number 3906251
- Adaptive gradient-free method for stochastic optimization
- Stochastic gradient method with Barzilai-Borwein step for unconstrained nonlinear optimization
- Adaptive methods using element-wise \(p\)th power of stochastic gradient for nonconvex optimization in deep neural networks
- An improvement of stochastic gradient descent approach for mean-variance portfolio optimization problem
- Accelerated and unaccelerated stochastic gradient descent in model generality
This page was built for publication: A heuristic adaptive fast gradient method in stochastic optimization problems
MaRDI item Q2207619