Approximation Algorithms for Training One-Node ReLU Neural Networks
From MaRDI portal
DOI: 10.1109/TSP.2020.3039360
OpenAlex: W3109377487
MaRDI QID: Q5103231
Authors: Guanyi Wang, Yao Xie, Santanu S. Dey
Publication date: 23 September 2022
Published in: IEEE Transactions on Signal Processing
Full work available at URL: https://doi.org/10.1109/tsp.2020.3039360
Cited in (4)
- Principled deep neural network training through linear programming
- Towards Lower Bounds on the Depth of ReLU Neural Networks
- Spline representation and redundancies of one-dimensional ReLU neural network models
- Neural networks with linear threshold activations: structure and algorithms