Strong error analysis for stochastic gradient descent optimization algorithms
Publication: Q4964091
DOI: 10.1093/imanum/drz055
zbMATH: 1460.65071
arXiv: 1801.09324
OpenAlex: W2786773456
Wikidata: Q126576349 (Scholia: Q126576349)
MaRDI QID: Q4964091
Authors: Ariel Neufeld, Arnulf Jentzen, Philippe von Wurstemberger, Benno Kuckuck
Publication date: 24 February 2021
Published in: IMA Journal of Numerical Analysis
Full work available at URL: https://arxiv.org/abs/1801.09324
Related Items (8)
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
Full error analysis for the training of deep neural networks
Stochastic gradient descent with noise of machine learning type. I: Discrete time analysis
Overall error analysis for the training of deep neural networks via stochastic gradient descent with random initialisation
Concentration inequalities for additive functionals: a martingale approach
Analysis of stochastic gradient descent in continuous time
Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations