How to Modify a Neural Network Gradually Without Changing Its Input-Output Functionality
DOI: 10.1162/neco.2009.05-08-781
zbMath: 1214.68280
DBLP: journals/neco/DiMattinaZ10
OpenAlex: W2130652810
Wikidata: Q51782258
Scholia: Q51782258
MaRDI QID: Q3562856
Christopher DiMattina, Kechen Zhang
Publication date: 28 May 2010
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco.2009.05-08-781
Related Items (2)
- Parameter Identifiability in Statistical Machine Learning: A Review
- Active Data Collection for Efficient Estimation and Comparison of Nonlinear Neural Models
Cites Work
- Reconstructing a neural net from its output
- Modeling inhibition of type II units in the dorsal cochlear nucleus
- Multilayer feedforward networks are universal approximators
- Generalized inverses. Theory and applications.
- Asymptotic Theory of Information-Theoretic Experimental Design
- How Optimal Stimuli for Sensory Neurons Are Constrained by Network Architecture
- Dynamics of Learning Near Singularities in Layered Networks
- A Simplex Method for Function Minimization
- Approximation by superpositions of a sigmoidal function