Disambiguating Visual Motion Through Contextual Feedback Modulation
From MaRDI portal
Publication: 3160475
DOI: 10.1162/0899766041732404
zbMath: 1084.68906
Wikidata: Q45033858 (Scholia: Q45033858)
MaRDI QID: Q3160475
Publication date: 9 February 2005
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/0899766041732404
MSC classification: 68T45 (Machine vision and scene understanding)
Related Items
A Bio-Inspired, Computational Model Suggests Velocity Gradients of Optic Flow Locally Encode Ordinal Depth at Surface Borders and Globally They Encode Self-Motion
Computing with a Canonical Neural Circuits Model with Pool Normalization and Modulating Feedback
Motion detection, noise reduction, texture suppression, and contour enhancement by spatiotemporal Gabor filters with surround inhibition
A neurally plausible model of the dynamics of motion integration in smooth eye pursuit based on recursive Bayesian estimation
Globally consistent depth sorting of overlapping 2D surfaces in a model using local recurrent interactions
Bifurcation analysis applied to a model of motion integration with a multistable stimulus
Biologically plausible learning in neural networks with modulatory feedback
Interaction of feedforward and feedback streams in visual cortex in a firing-rate model of columnar computations
Cites Work