Adrian Riekert

From MaRDI portal



List of research outcomes

This list is not complete and currently contains only items from zbMATH Open and arXiv. We are working on adding further sources - please check back here soon!

Publication | Date of Publication | Type
Non-convergence to global minimizers for Adam and stochastic gradient descent optimization and constructions of local minimizers in the training of artificial neural networks (SIAM/ASA Journal on Uncertainty Quantification) | 2025-09-30 | Paper
On the existence of global minima and convergence analyses for gradient descent methods in the training of deep neural networks (Journal of Machine Learning) | 2025-07-22 | Paper
Strong overall error analysis for the training of artificial neural networks via random initializations (Communications in Mathematics and Statistics) | 2024-10-10 | Paper
A proof of the corrected Sister Beiter cyclotomic coefficient conjecture inspired by Zhao and Zhang | 2023-04-18 | Paper
Deep neural network approximation of composite functions without the curse of dimensionality | 2023-04-12 | Paper
Algorithmically Designed Artificial Neural Networks (ADANNs): Higher order deep operator learning for parametric partial differential equations | 2023-02-07 | Paper
Convergence to good non-optimal critical points in the training of neural networks: Gradient descent optimization with one random initialization overcomes all bad non-global local minima with high probability | 2022-12-26 | Paper
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation (Journal of Mathematical Analysis and Applications) | 2022-09-30 | Paper
Convergence rates for empirical measures of Markov chains in dual and Wasserstein distances (Statistics & Probability Letters) | 2022-08-30 | Paper
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (ZAMP. Zeitschrift für angewandte Mathematik und Physik) | 2022-08-25 | Paper
A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (Journal of Complexity) | 2022-06-17 | Paper
On the existence of infinitely many realization functions of non-global local minima in the training of artificial neural networks with ReLU activation | 2022-02-23 | Paper
Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation (available as arXiv preprint) | 2021-07-09 | Paper
A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (available as arXiv preprint) | 2021-04-01 | Paper


Research outcomes over time


This page was built for person: Adrian Riekert