Gradient descent provably escapes saddle points in the training of shallow ReLU networks

From MaRDI portal
Publication:6655804

