Modelling memory functions with recurrent neural networks consisting of input compensation units: I. Static situations
Publication: 2459153
DOI: 10.1007/s00422-006-0137-x
zbMATH: 1122.92011
OpenAlex: W1998909804
Wikidata: Q51925534
Scholia: Q51925534
MaRDI QID: Q2459153
Wolf-Jürgen Beyn, Simone Kühn, Holk Cruse
Publication date: 5 November 2007
Published in: Biological Cybernetics
Full work available at URL: https://doi.org/10.1007/s00422-006-0137-x
Related Items (5)
- Compact internal representation of dynamic situations: neural network implementing the causality principle
- Supervised spike-timing-dependent plasticity: a spatiotemporal neuronal learning rule for function approximation and decisions
- Elements for a general memory structure: properties of recurrent neural networks used to form situation models
- Modelling memory functions with recurrent neural networks consisting of input compensation units: II. Dynamic situations
- Self-organizing memory: active learning of landmarks used for navigation
Cites Work
- Dynamics of pattern formation in lateral-inhibition type neural fields
- Modelling memory functions with recurrent neural networks consisting of input compensation units: II. Dynamic situations
- Equivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network
- Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
- A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue
- Neural networks and physical systems with emergent collective computational abilities.
- Neurons with graded response have collective computational properties like those of two-state neurons.