Provable Training of a ReLU Gate with an Iterative Non-Gradient Algorithm
Publication: 6340320
DOI: 10.1016/j.neunet.2022.03.040
arXiv: 2005.04211
OpenAlex: W4223985028
MaRDI QID: Q6340320
FDO: Q6340320
Sayar Karmakar, Anirbit Mukherjee
Publication date: 8 May 2020
Abstract: In this work, we demonstrate provable guarantees on the training of a single ReLU gate in hitherto unexplored regimes. We give a simple iterative stochastic algorithm that can train a ReLU gate in the realizable setting in linear time while using significantly milder conditions on the data distribution than previous such results. Leveraging certain additional moment assumptions, we also show a first-of-its-kind approximate recovery of the true label-generating parameters under an (online) data-poisoning attack on the true labels, while training a ReLU gate by the same algorithm. Our guarantee is shown to be nearly optimal in the worst case, and the accuracy of recovering the true weight degrades gracefully with increasing attack probability and magnitude. For both the realizable and the non-realizable cases outlined above, our analysis allows for mini-batching and computes how the convergence time scales with the mini-batch size. We corroborate our theorems with simulation results, which also bring to light a striking similarity between the trajectories of our algorithm and those of the popular S.G.D. algorithm, for which comparable guarantees are still unknown.
Full work available at URL: https://doi.org/10.1016/j.neunet.2022.03.040
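The abstract does not spell out the iterative non-gradient update here. As a purely illustrative sketch (not claimed to be the paper's exact algorithm), the snippet below implements a GLM-Tron-style mini-batch rule for a single ReLU gate in the realizable setting; all function names, hyperparameters, and the planted-weight demo are hypothetical choices made for this example.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def train_relu_gate(X, y, eta=0.1, epochs=10, batch_size=8, rng=None):
    """Fit y ~ relu(w . x) with a GLM-Tron-style (non-gradient) mini-batch update:
        w <- w + eta * mean_i (y_i - relu(w . x_i)) * x_i
    Unlike SGD on the squared loss, the update omits the ReLU derivative.
    Assumes the number of samples is divisible by batch_size."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in rng.permutation(n).reshape(-1, batch_size):
            Xb, yb = X[idx], y[idx]
            residual = yb - relu(Xb @ w)          # prediction error on the batch
            w += eta * (Xb.T @ residual) / batch_size
    return w

# Realizable-setting demo: labels generated by a planted weight vector w_star.
rng = np.random.default_rng(0)
w_star = rng.standard_normal(5)
X = rng.standard_normal((1024, 5))
y = relu(X @ w_star)
w_hat = train_relu_gate(X, y, eta=0.1, epochs=50, batch_size=32, rng=rng)
print("recovery error:", np.linalg.norm(w_hat - w_star))
```

The batch size appears explicitly in the update, which mirrors the abstract's point that the analysis tracks how convergence time scales with the mini-batch size.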
Recommendations
- Provable approximation properties for deep neural networks
- Gradient explosion free algorithm for training recurrent neural networks
- Provably training overparameterized neural network classifiers with non-convex constraints
- Nonlinear approximation and (deep) ReLU networks
- A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions
- Convergence analysis for gradient flows in the training of artificial neural networks with ReLU activation
- Constructive deep ReLU neural network approximation
- Towards Lower Bounds on the Depth of ReLU Neural Networks
Cites Work
Cited In (1)