Learning Precise Spike Train–to–Spike Train Transformations in Multilayer Feedforward Neuronal Networks
Publication: 5380422
DOI: 10.1162/NECO_A_00829
zbMATH Open: 1414.92005
arXiv: 1412.4210
OpenAlex: W2293092608
Wikidata: Q50529635 (Scholia: Q50529635)
MaRDI QID: Q5380422
FDO: Q5380422
Authors: Arunava Banerjee
Publication date: 4 June 2019
Published in: Neural Computation
Abstract: We derive a synaptic weight update rule for learning temporally precise spike train to spike train transformations in multilayer feedforward networks of spiking neurons. The framework, aimed at seamlessly generalizing error backpropagation to the deterministic spiking neuron setting, is based strictly on spike timing and avoids invoking concepts pertaining to spike rates or probabilistic models of spiking. The derivation is founded on two innovations. First, an error functional is proposed that compares the spike train emitted by the output neuron of the network to the desired spike train by way of their putative impact on a virtual postsynaptic neuron. This formulation sidesteps the need for spike alignment and leads to closed form solutions for all quantities of interest. Second, virtual assignment of weights to spikes rather than synapses enables a perturbation analysis of individual spike times and synaptic weights of the output as well as all intermediate neurons in the network, which yields the gradients of the error functional with respect to the said entities. Learning proceeds via a gradient descent mechanism that leverages these quantities. Simulation experiments demonstrate the efficacy of the proposed learning framework. The experiments also highlight asymmetries between synapses on excitatory and inhibitory neurons.
Full work available at URL: https://arxiv.org/abs/1412.4210
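To make the idea in the abstract concrete, the following is a minimal sketch, not taken from the paper's code: it compares an output spike train and a desired spike train through their putative impact on a virtual postsynaptic neuron, so no spike alignment is needed. The exponential kernel, the time constant, and all function names are illustrative assumptions.

```python
import numpy as np

def psp(t, tau=10.0):
    """Assumed postsynaptic-potential kernel of the virtual neuron:
    an illustrative causal exponential decay with time constant tau (ms)."""
    return np.where(t >= 0.0, np.exp(-np.maximum(t, 0.0) / tau), 0.0)

def virtual_potential(spike_times, grid, tau=10.0):
    """Putative impact of a spike train on the virtual postsynaptic neuron,
    evaluated on a time grid as the sum of kernel responses to each spike."""
    v = np.zeros_like(grid)
    for s in spike_times:
        v += psp(grid - s, tau)
    return v

def spike_train_error(output_spikes, desired_spikes, grid, tau=10.0):
    """Squared distance between the two virtual potentials, integrated over
    the time grid (trapezoidal rule)."""
    diff = (virtual_potential(output_spikes, grid, tau)
            - virtual_potential(desired_spikes, grid, tau))
    return 0.5 * np.trapz(diff ** 2, grid)

# The error shrinks as the output spike train approaches the desired one.
grid = np.linspace(0.0, 100.0, 2001)          # time grid in ms
desired = [20.0, 45.0, 70.0]
print(spike_train_error([25.0, 50.0, 75.0], desired, grid))
print(spike_train_error([21.0, 46.0, 71.0], desired, grid))
```

Because this kind of error is a smooth function of individual spike times, its gradients with respect to spike times and synaptic weights can be derived and used for gradient descent, which is the mechanism the abstract describes.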
Recommendations
- Learning spatiotemporally encoded pattern transformations in structured spiking neural networks
- Training spiking neural networks in the strong coupling regime
- Spiking neural networks for cortical neuronal spike train decoding
- An exact mapping from ReLU networks to spiking neural networks
- Training much deeper spiking neural networks with a small number of time-steps
- Supervised learning in multilayer spiking neural networks
- Robust learning in SpikeProp
Cites Work
- Learning representations by back-propagating errors
- A novel spike distance
- Spiking Neuron Models
- Error-backpropagation in temporally encoded networks of spiking neurons
- Supervised learning in multilayer spiking neural networks
- On the Phase-Space Dynamics of Systems of Spiking Neurons. I: Model and Experiments
- A gradient descent rule for spiking neurons emitting multiple spikes
- Supervised Learning in Spiking Neural Networks with ReSuMe: Sequence Learning, Classification, and Spike Shifting
- Connectomic constraints on computation in feedforward networks of spiking neurons
- On the sensitive dependence on initial conditions of the dynamics of networks of spiking neurons
Cited In (9)
- An exact mapping from ReLU networks to spiking neural networks
- Multilevel Artificial Neural Network Training for Spatially Correlated Learning
- On the Algorithmic Power of Spiking Neural Networks
- Title not available
- A remark on the error-backpropagation learning algorithm for spiking neural networks
- Modeling the learning of a spiking neural network with synaptic delays
- Spike-Timing Error Backpropagation in Theta Neuron Networks
- SuperSpike: supervised learning in multilayer spiking neural networks
- Robust spike-train learning in spike-event based weight update