A linear relation between input and first layer in neural networks
Publication: 2294577
DOI: 10.1007/S10472-019-09657-3
zbMATH Open: 1430.68276
OpenAlex: W2966918613
Wikidata: Q127390091
Scholia: Q127390091
MaRDI QID: Q2294577
FDO: Q2294577
Author: Sebastián Alberto Grillo
Publication date: 11 February 2020
Published in: Annals of Mathematics and Artificial Intelligence
Full work available at URL: https://doi.org/10.1007/s10472-019-09657-3
Recommendations
- Minimal feedforward parity networks using threshold gates
- On the capabilities of multilayer perceptrons
- Rational approximation techniques for analysis of neural networks
- Exact classification with two-layer neural nets in \(n\) dimensions
- Approximation properties of some two-layer feedforward neural networks
Cites Work
- Reducing the Dimensionality of Data with Neural Networks
- Learning deep architectures for AI
- Introduction to algorithms.
- Title not available
- Title not available
- Title not available
- A theory of the learnable
- On the Size of Weights for Threshold Gates
- Deep vs. shallow networks: an approximation theory perspective