Training a Single Sigmoidal Neuron Is Hard
DOI: 10.1162/089976602760408035
zbMath: 1060.68099
Wikidata: Q52112387 (Scholia: Q52112387)
MaRDI QID: Q4409384
Publication date: 14 October 2003
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/089976602760408035
MSC classification 68T05: Learning and adaptive systems in artificial intelligence
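The title refers to the loading problem for a single sigmoid unit: given samples (x_i, y_i), find weights w and bias b so that sigmoid(w·x + b) approximates the targets within a prescribed error, a problem the paper shows to be NP-hard in the worst case. As a purely illustrative sketch of what is being trained (the function and toy data below are assumptions, not taken from the paper), gradient descent on squared error for one sigmoid neuron looks like this:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_sigmoid_neuron(X, y, lr=0.5, epochs=2000, seed=0):
    """Fit a single sigmoid unit y ~ sigmoid(w.x + b) by gradient
    descent on mean squared error. Illustrative only: since the
    decision version of this fitting problem is NP-hard, local
    search like this carries no global optimality guarantee."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(scale=0.1, size=d)
    b = 0.0
    for _ in range(epochs):
        out = sigmoid(X @ w + b)
        err = out - y                      # dE/d(out), up to a constant factor
        grad_z = err * out * (1.0 - out)   # chain rule through the sigmoid
        w -= lr * (X.T @ grad_z) / n
        b -= lr * grad_z.mean()
    return w, b

if __name__ == "__main__":
    # Toy data: an AND-like function of two binary inputs (an assumption
    # for demonstration; it happens to be learnable by one neuron).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1], dtype=float)
    w, b = train_sigmoid_neuron(X, y)
    print(np.round(sigmoid(X @ w + b), 2))
```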
Related Items
- Loading Deep Networks Is Hard: The Pyramidal Case
- On the Nonlearnability of a Single Spiking Neuron
- Long-Range Out-of-Sample Properties of Autoregressive Neural Networks
Cites Work
- Computational limitations on training sigmoid neural networks
- On the complexity of loading shallow neural networks
- On the complexity of polyhedral separability
- Feedforward nets for interpolation and classification
- The densest hemisphere problem
- The hardness of approximate optima in lattices, codes, and systems of linear equations
- Robust trainability of single neurons
- On the geometric separability of Boolean functions
- Learnability and the Vapnik-Chervonenkis dimension
- Computational limitations on learning from examples
- The computational intractability of training sigmoidal neural networks
- Learning representations by back-propagating errors