Hardness results for neural network approximation problems
DOI: 10.1016/S0304-3975(01)00057-3
zbMATH: 0997.68098
MaRDI QID: Q1603592
Authors: Peter L. Bartlett, Shai Ben-David
Publication date: 15 July 2002
Published in: Theoretical Computer Science
Related Items
- Deep learning: a statistical viewpoint
- Some connections between learning and optimization
- Convolution hierarchical deep-learning neural networks (C-HiDeNN): finite elements, isogeometric analysis, tensor decomposition, and beyond
- Rank correlation estimators and their limiting distributions
- Loading Deep Networks Is Hard: The Pyramidal Case
- Topological properties of the set of functions generated by neural networks of fixed size
- Theory of Classification: a Survey of Some Recent Advances
- On approximate learning by multi-layered feedforward circuits
Cites Work
- On the complexity of polyhedral separability
- Optimization, approximation, and complexity classes
- Decision theoretic generalizations of the PAC model for neural net and other learning applications
- The hardness of approximation: Gap location
- The hardness of approximate optima in lattices, codes, and systems of linear equations
- On the difficulty of approximately maximizing agreements
- Robust trainability of single neurons
- Cryptographic limitations on learning Boolean formulae and finite automata
- Strong universal consistency of neural network classifiers
- The computational intractability of training sigmoidal neural networks
- Efficient agnostic learning of neural networks with bounded fan-in