Blow up phenomena for gradient descent optimization methods in the training of artificial neural networks

From MaRDI portal
Publication:6418784

arXiv: 2211.15641 · MaRDI QID: Q6418784 · FDO: Q6418784


Authors: Davide Gallon, Arnulf Jentzen, Felix Lindner


Publication date: 28 November 2022

Abstract: In this article we investigate blow up phenomena for gradient descent optimization methods in the training of artificial neural networks (ANNs). Our theoretical analysis is focused on shallow ANNs with one neuron on the input layer, one neuron on the output layer, and one hidden layer. For ANNs with ReLU activation and at least two neurons on the hidden layer we establish the existence of a target function such that there exists a lower bound for the risk values of the critical points of the associated risk function which is strictly greater than the infimum of the image of the risk function. This allows us to demonstrate that every gradient flow trajectory with an initial risk smaller than this lower bound diverges. Furthermore, we analyze and compare various popular types of activation functions with regard to the divergence of gradient flow trajectories and gradient descent trajectories in the training of ANNs and with regard to the closely related question concerning the existence of global minimum points of the risk function.
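The setting described in the abstract — a shallow ANN with one input neuron, one output neuron, one hidden ReLU layer, and a mean-squared risk minimized by gradient descent — can be illustrated with a minimal numerical sketch. The target function, initialization, and step size below are hypothetical choices made for illustration only; they are not the paper's specific construction, and this particular run is not claimed to exhibit the blow-up behavior the paper establishes.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def risk_and_grad(w, b, v, c, x, y):
    """Mean-squared risk of the shallow ReLU ANN and its gradient.

    Realization: f(x) = sum_h v[h] * relu(w[h] * x + b[h]) + c
    """
    z = np.outer(x, w) + b          # hidden pre-activations, shape (n, H)
    act = relu(z)                   # hidden activations
    out = act @ v + c               # network realization at the data points
    r = out - y                     # residuals
    risk = np.mean(r ** 2)
    ind = (z > 0).astype(float)     # ReLU subgradient indicator
    gv = 2.0 * np.mean(r[:, None] * act, axis=0)
    gc = 2.0 * np.mean(r)
    gw = 2.0 * np.mean(r[:, None] * v * ind * x[:, None], axis=0)
    gb = 2.0 * np.mean(r[:, None] * v * ind, axis=0)
    return risk, gw, gb, gv, gc

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)       # one input neuron: scalar inputs
y = np.abs(x - 0.5)                 # hypothetical target function (not the paper's)
H = 2                               # at least two hidden neurons, as in the ReLU result
w, b, v = rng.normal(size=H), rng.normal(size=H), rng.normal(size=H)
c = 0.0
lr, risks, norms = 0.01, [], []
for step in range(200):             # explicit Euler discretization of the gradient flow
    risk, gw, gb, gv, gc = risk_and_grad(w, b, v, c, x, y)
    risks.append(risk)
    norms.append(np.sqrt(np.sum(w**2) + np.sum(b**2) + np.sum(v**2) + c**2))
    w -= lr * gw; b -= lr * gb; v -= lr * gv; c -= lr * gc
```

Tracking the parameter norm alongside the risk is the natural diagnostic here: in the divergent scenario the paper constructs, trajectories escape to infinity in parameter space even as the risk stays below the critical-point lower bound.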

