Contour Enhancement, Short Term Memory, and Constancies in Reverberating Neural Networks
Publication: 4766857
DOI: 10.1002/sapm1973523213
zbMath: 0281.92005
OpenAlex: W4241653134
Wikidata: Q61821785 (Scholia: Q61821785)
MaRDI QID: Q4766857
Publication date: 1973
Published in: Studies in Applied Mathematics
Full work available at URL: https://doi.org/10.1002/sapm1973523213
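The publication analyzes reverberating (recurrent) on-center off-surround networks with shunting dynamics and how the choice of feedback signal function governs contour enhancement, short-term memory storage, and pattern constancies. The following Python sketch illustrates one such network; the equation form, parameter names (A, B), the particular sigmoid signal function, and the integration settings are common textbook conventions used here as assumptions, not a transcription of the paper's own equations.

# Minimal sketch (assumption-based) of a recurrent shunting on-center
# off-surround network of the kind studied in this publication: each node
# excites itself and inhibits all others through a signal function f. The
# sigmoid choice of f below is one of the signal-function classes whose
# effects on enhancement and short-term memory the paper compares.
import numpy as np

def f(x):
    # Sigmoid signal function (illustrative choice).
    return x**2 / (0.25 + x**2)

def simulate(x0, A=1.0, B=1.0, dt=0.01, steps=5000):
    # Integrate dx_i/dt = -A*x_i + (B - x_i)*f(x_i) - x_i * sum_{k != i} f(x_k)
    # with simple forward-Euler steps.
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(steps):
        s = f(x)
        dx = -A * x + (B - x) * s - x * (s.sum() - s)
        x += dt * dx
    return x

# With a sigmoid f, small initial activities tend to be quenched while the
# larger ones are contrast-enhanced and stored in short-term memory.
print(simulate([0.2, 0.4, 0.6, 0.8]))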
Related Items (39)
- A nonlinear compartmental formulation for some classical population interactions
- How laminar frontal cortex and basal ganglia circuits interact to control planned and reactive saccades
- Fast synchronization of perceptual grouping in laminar visual cortical circuits
- How does the brain rapidly learn and reorganize view-invariant and position-invariant object representations in the inferotemporal cortex?
- A global competitive neural network
- A neural model of decision-making by the superior colliculus in an antisaccade task
- KWTA networks and their applications
- Biophysiologically Plausible Implementations of the Maximum Operation
- Space, time and learning in the hippocampus: How fine spatial and temporal scales are expanded into population codes for behavioral control
- A class of convergent neural network dynamics
- Competition, decision, and consensus
- Episodic memory: a hierarchy of spatiotemporal concepts
- Intracellular mechanisms of adaptation and self-regulation in self-organizing networks: The role of chemical transducers
- A Canonical Neural Circuit for Cortical Nonlinear Operations
- A computational learning theory of active object recognition under uncertainty
- Delay for the capacity-simplicity dilemma in associative memory attractor networks
- Computing with a Canonical Neural Circuits Model with Pool Normalization and Modulating Feedback
- Mereology in Engineering and Computer Science
- A unified neural network model of spatiotemporal processing in \(X\) and \(Y\) retinal ganglion cells. I: Analytical results
- Computing with Spikes: The Advantage of Fine-Grained Timing
- Global dynamics of neural nets with infinite gain
- On the development of feature detectors in the visual cortex with applications to learning and reaction-diffusion systems
- An Amplitude Equation Approach to Contextual Effects in Visual Cortex
- Adaptive pattern classification and universal recoding. I: Parallel development and coding of neural feature detectors
- Adaptive pattern classification and universal recoding. II: Feedback, expectation, olfaction, illusions
- Coexistence and local stability of multiple equilibria in neural networks with piecewise linear nondecreasing activation functions
- Pattern formation by the global limits of a nonlinear competitive interaction in n dimensions
- Validating a model for detecting magnetic field intensity using dynamic neural fields
- State-Dependent Computation Using Coupled Recurrent Networks
- Simulated neural dynamics of decision-making in an auditory delayed match-to-sample task
- Existence of a limiting pattern for a system of nonlinear equations describing interpopulation competition
- Massively parallel analog tabu search using neural networks applied to simple plant location problems
- Models of central capacity and concurrency
- Unnamed Item
- A Simple Neural Network Exhibiting Selective Activation of Neuronal Ensembles: From Winner-Take-All to Winners-Share-All
- Interaction of feedforward and feedback streams in visual cortex in a firing-rate model of columnar computations
- Neural population modeling and psychology: a review
- Rectification of correlation by a sigmoid nonlinearity
- Stable habitual domains: Existence and implications
Cites Work
- Unnamed Item
- A neural theory of punishment and avoidance. I: Qualitative theory
- A neural theory of punishment and avoidance. II: Quantitative theory
- Pavlovian Pattern Learning by Nonlinear Neural Networks
- Neural expectation: cerebellar and retinal analogs of cells fired by learnable or unlearned pattern classes