Learning internal representations in an attractor neural network with analogue neurons
DOI: 10.1088/0954-898X/6/3/004 · zbMATH Open: 0830.92002 · OpenAlex: W4252857511 · MaRDI QID: Q4851671
Authors: Daniel J. Amit, Nicolas J.-B. Brunel
Publication date: 15 January 1996
Published in: Network: Computation in Neural Systems
Full work available at URL: https://doi.org/10.1088/0954-898x/6/3/004
Keywords: simulations; different time scales; afferent currents; external synaptic inputs; learning attractor neural network; output spike rates; unsupervised, analogue Hebbian process
MSC classifications: Learning and adaptive systems in artificial intelligence (68T05); Neural networks for/in biological studies, artificial life and related topics (92B20)
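The keywords above mention an unsupervised, analogue Hebbian process driven by output spike rates. As a hedged illustration only (the update rule, variable names, and constants below are generic assumptions, not the specific dynamics of Amit and Brunel's model), a rate-based Hebbian weight update might be sketched as:

```python
import numpy as np

# Illustrative sketch: a rate-based ("analogue") Hebbian update.
# The rule and constants here are generic assumptions, not the
# paper's actual learning dynamics.

def hebbian_update(W, rates, lr=0.01, decay=0.001):
    """One unsupervised Hebbian step on the synaptic matrix W.

    W      : (N, N) synaptic weight matrix
    rates  : (N,) analogue output spike rates of the neurons
    lr     : learning rate
    decay  : weight decay keeping the weights bounded
    """
    # Hebbian term: correlated pre/post rates strengthen the synapse;
    # the decay term prevents unbounded growth.
    dW = lr * np.outer(rates, rates) - decay * W
    W = W + dW
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

N = 4
W = np.zeros((N, N))
rates = np.array([1.0, 0.0, 1.0, 0.0])  # a stimulus-driven rate pattern
for _ in range(100):
    W = hebbian_update(W, rates)

# Co-active neurons (0 and 2) become strongly coupled; others stay near zero.
print(W[0, 2] > W[0, 1])
```

Repeated presentation of the same rate pattern carves out a strongly coupled subgroup of neurons, which is the intuition behind an attractor forming for that pattern.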
Cited In (7)
- Title not available
- Reducing a cortical network to a Potts model yields storage capacity estimates
- An autoassociative neural network model of paired-associate learning
- A hierarchical dynamical map as a basic frame for cortical mapping and its application to priming
- Anatomical constraints on lateral competition in columnar cortical architectures
- Adequate input for learning in attractor neural networks
- Modeling and estimating recall processing capacity: sensitivity and diagnostic utility in application to mild cognitive impairment