The computational intractability of training sigmoidal neural networks
From MaRDI portal
Publication: 4336293
DOI: 10.1109/18.567673
zbMATH Open: 0874.68255
OpenAlex: W2108030384
MaRDI QID: Q4336293
FDO: Q4336293
Authors: Lee Kenneth Jones
Publication date: 10 November 1997
Published in: IEEE Transactions on Information Theory
Full work available at URL: https://doi.org/10.1109/18.567673
Cited in (19):
- Ill-Conditioning in Neural Network Training Problems
- A mathematical solution to a network construction problem.
- Computational limitations on training sigmoid neural networks
- Title not available
- Title not available
- An unfeasibility view of neural network learning
- On approximate learning by multi-layered feedforward circuits
- Algorithmic Learning Theory
- Title not available
- Loading Deep Networks Is Hard: The Pyramidal Case
- Local greedy approximation for nonlinear regression and neural network training.
- Training a Single Sigmoidal Neuron Is Hard
- Title not available
- Some problems in the theory of ridge functions
- Ridgelets: estimating with ridge functions
- Hardness results for neural network approximation problems
- On the complexity of loading shallow neural networks
- Title not available
- Convergence of a least-squares Monte Carlo algorithm for American option pricing with dependent sample data