How Optimal Stimuli for Sensory Neurons Are Constrained by Network Architecture
Publication:5453533
DOI: 10.1162/NECO.2007.11-05-076
zbMATH Open: 1133.92002
OpenAlex: W2117721609
Wikidata: Q51899595
Scholia: Q51899595
MaRDI QID: Q5453533
FDO: Q5453533
Kechen Zhang, Christopher DiMattina
Publication date: 2 April 2008
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco.2007.11-05-076
Recommendations
- Optimal Neuronal Tuning for Finite Stimulus Spaces
- Information maximization explains the sparseness of presynaptic neural response
- Metabolic cost of neuronal information in an empirical stimulus-response model
- A stimulus-dependent connectivity analysis of neuronal networks
- Switching neuronal state: optimal stimuli revealed using a stochastically-seeded gradient algorithm
Cites Work
- New conditions for global stability of neural networks with application to linear and quadratic programming problems
- Neurons with graded response have collective computational properties like those of two-state neurons.
- Characteristics of Random Nets of Analog Neuron-Like Elements
- New Conditions on Global Stability of Cohen-Grossberg Neural Networks
- The alopex process: Visual receptive fields by response feedback
Cited In (4)