Neural networks with linear threshold activations: structure and algorithms
DOI: 10.1007/S10107-023-02016-5
zbMATH Open: 1545.68113
MaRDI QID: Q6589753
Authors: Sammy Khalife, Hongyu Cheng, Amitabh Basu
Publication date: 20 August 2024
Published in: Mathematical Programming. Series A. Series B
Mathematics Subject Classification:
- Artificial neural networks and deep learning (68T07)
- Polyhedral combinatorics, branch-and-bound, branch-and-cut (90C57)
- Analysis of algorithms and problem complexity (68Q25)
- Neural networks for/in biological studies, artificial life and related topics (92B20)
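For context on the activation function named in the title: a linear threshold unit outputs 1 when an affine function of its input is positive and 0 otherwise. The sketch below is a minimal generic illustration of that textbook definition; the function name, the strict inequality convention, and the AND example are ours, not taken from the paper.

```python
import numpy as np

def linear_threshold_unit(x, w, b):
    """Linear threshold (Heaviside) neuron: fires iff the affine form w·x + b is positive.

    Illustrative sketch only; the paper's exact conventions
    (e.g., strict vs. non-strict inequality) may differ.
    """
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

# Example: a single unit computing logical AND of two binary inputs.
w = np.array([1.0, 1.0])
b = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, linear_threshold_unit(np.array(x, dtype=float), w, b))
```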
Cites Work
- Size–Depth Tradeoffs for Threshold Circuits
- Approximation by superpositions of a sigmoidal function
- Neural Network Learning
- Title not available
- Approximating threshold circuits by rational functions
- Robust trainability of single neurons
- Super-linear gate and super-quadratic wire lower bounds for depth-two and depth-three threshold circuits
- Complexity of training ReLU neural network
- Approximation Algorithms for Training One-Node ReLU Neural Networks
- The Computational Complexity of ReLU Network Training Parameterized by Data Dimensionality