Neural Representation of Probabilistic Information
Abstract: It has been proposed that populations of neurons process information in terms of probability density functions (PDFs) of analog variables. Such analog variables range, for example, from target luminance and depth on the sensory interface to eye position and joint angles on the motor output side. The requirement that analog variables must be processed leads inevitably to a probabilistic description, while the limited precision and lifetime of the neuronal processing units leads naturally to a population representation of information. We show how a time-dependent probability density, residing in a specified function space of dimension D, may be decoded from the neuronal activities in a population as a linear combination of certain decoding functions, with coefficients given by the N firing rates (generally with D << N). We show how the neuronal encoding process may be described by projecting a set of complementary encoding functions on the probability density, and passing the result through a rectifying nonlinear activation function. We show how both encoders and decoders may be determined by minimizing cost functions that quantify the inaccuracy of the representation. Expressing a given computation in terms of manipulation and transformation of probabilities, we show how this representation leads to a neural circuit that can carry out the required computation within a consistent Bayesian framework, with the synaptic weights being explicitly generated in terms of encoders, decoders, conditional probabilities, and priors.
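The encode/decode scheme described in the abstract can be illustrated with a small numerical sketch: firing rates are obtained by projecting encoding functions on a density and rectifying, and the density is reconstructed as a linear combination of decoding functions found by least squares. All specifics here (Gaussian-bump encoders, the bias term, the D-dimensional basis of bumps) are illustrative assumptions, not the paper's actual parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Grid over the analog variable x.
x = np.linspace(-1, 1, 200)
dx = x[1] - x[0]

N = 50   # number of neurons
D = 8    # dimension of the function space holding the densities (D << N)

# Hypothetical Gaussian-bump encoding functions e_i(x) with random centers.
centers = rng.uniform(-1, 1, N)
E = np.exp(-((x[None, :] - centers[:, None]) ** 2) / (2 * 0.2 ** 2))

def encode(p):
    """Firing rates: project the encoders on the density, then rectify."""
    a = E @ p * dx - 0.1          # assumed bias before the nonlinearity
    return np.maximum(a, 0.0)     # rectifying activation function

# Training densities sampled from a D-dimensional function space:
# nonnegative mixtures of D fixed Gaussian basis bumps, normalized.
basis = np.exp(-((x[None, :] - np.linspace(-0.8, 0.8, D)[:, None]) ** 2)
               / (2 * 0.3 ** 2))
W = rng.uniform(0, 1, (500, D))
P = W @ basis
P /= P.sum(axis=1, keepdims=True) * dx   # each row integrates to 1

# Least-squares decoders d_i(x): minimize ||P - R @ Dec||^2 over Dec,
# i.e. a cost function quantifying the inaccuracy of the representation.
R = np.array([encode(p) for p in P])         # (samples, N) firing rates
Dec, *_ = np.linalg.lstsq(R, P, rcond=None)  # (N, grid) decoding functions

# Decode a held-out density as a linear combination of the decoders,
# with coefficients given by the N firing rates.
p_test = np.exp(-(x - 0.3) ** 2 / (2 * 0.3 ** 2))
p_test /= p_test.sum() * dx
p_hat = encode(p_test) @ Dec

err = np.sqrt(np.mean((p_hat - p_test) ** 2)) / np.sqrt(np.mean(p_test ** 2))
print(f"relative decoding error: {err:.3f}")
```

Only the decoders are optimized here; the paper also determines the encoders by cost minimization, which this sketch omits for brevity.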
Cites work
- A PDF model of populations of Purkinje cells: Nonlinear interactions and high variability
- Connectionist learning of belief networks
- Developing and applying a toolkit from a general neurocomputational framework
- Information Processing in Dendritic Trees
- Varieties of Helmholtz machine
- Vector reconstruction from firing rates
Cited in (11)
- Bayesian Spiking Neurons I: Inference
- Neural representation of probabilities for Bayesian inference
- Linearization of excitatory synaptic integration at no extra cost
- Dual coding hypotheses for neural information representation
- Designing neural networks that process mean values of random variables
- Advances in Neural Networks – ISNN 2005
- Approximate, computationally efficient online learning in Bayesian spiking neurons
- Neuromorphic features of probabilistic neural networks
- Fast Population Coding
- scientific article (no title available; zbMATH DE number 1928786)
- scientific article (no title available; zbMATH DE number 5314896)
MaRDI item: Q4814200