Stopping rules for gradient methods for non-convex problems with additive noise in gradient
DOI: 10.1007/s10957-023-02245-w
zbMath: 1522.90141
arXiv: 2205.07544
OpenAlex: W4379010939
MaRDI QID: Q6051170
Authors: Boris T. Polyak, Ilya A. Kuruzov, Fedor S. Stonyakin
Publication date: 19 September 2023
Published in: Journal of Optimization Theory and Applications
Full work available at URL: https://arxiv.org/abs/2205.07544
MSC classification:
- Numerical mathematical programming methods (65K05)
- Nonconvex programming, global optimization (90C26)
- Methods of reduced gradient type (90C52)
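The paper concerns stopping rules for gradient methods when each gradient evaluation is corrupted by additive noise of magnitude at most delta. As a minimal illustrative sketch of that idea (not the authors' specific rule; the function names, step size, and threshold factor c below are hypothetical choices), a natural rule of this kind halts once the observed gradient norm falls to a multiple of the noise level, since further decrease can no longer be distinguished from noise:

import numpy as np

def gd_with_noise_stopping(grad, x0, step, delta, c=2.0, max_iter=10_000):
    """Gradient descent with an inexact gradient oracle.

    grad(x) returns the true gradient plus additive noise with norm at
    most delta. Stops once the *observed* gradient norm drops to
    c * delta, the level below which progress is indistinguishable
    from noise. (Illustrative sketch; c and step are assumed choices.)
    """
    x = x0
    for k in range(max_iter):
        g = grad(x)                    # noisy gradient: ||g - true grad|| <= delta
        if np.linalg.norm(g) <= c * delta:
            return x, k                # stop: signal is at the noise floor
        x = x - step * g
    return x, max_iter

# Usage example: minimize f(x) = ||x||^2 / 2, whose true gradient is x,
# with additive noise of magnitude exactly delta.
rng = np.random.default_rng(0)
delta = 1e-3

def noisy_grad(x):
    noise = rng.normal(size=x.shape)
    noise *= delta / np.linalg.norm(noise)   # rescale so ||noise|| = delta
    return x + noise

x_final, iters = gd_with_noise_stopping(noisy_grad, np.ones(5), step=0.5, delta=delta)
print(iters, np.linalg.norm(x_final))

Here the threshold c * delta acts as the smallest gradient norm the noisy oracle can certify; shrinking c trades extra iterations against a smaller guaranteed gradient norm at the returned point.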
Cites Work
- First-order methods of smooth convex optimization with inexact oracle
- Universal gradient methods for convex optimization problems
- A stopping rule in iteration procedures for solving ill-posed problems
- Cubic regularization of Newton method and its global performance
- Smooth Optimization with Approximate Gradient
- New versions of Newton method: step-size choice, convergence domain and under-determined equations
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation