Lower Bounds for the Computational Power of Networks of Spiking Neurons

From MaRDI portal
Publication:4880197

DOI: 10.1162/neco.1996.8.1.1
zbMath: 0843.68104
OpenAlex: W1967983190
MaRDI QID: Q4880197

Wolfgang Maass

Publication date: 5 June 1996

Published in: Neural Computation

Full work available at URL: https://doi.org/10.1162/neco.1996.8.1.1

Related Items (22)

General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
Adaptive Synchronization of Activities in a Recurrent Network
Robust spike-train learning in spike-event based weight update
Adaptive learning rate of SpikeProp based on weight convergence analysis
Learning hierarchically-structured concepts
Spike-Timing-Dependent Construction
Positive Neural Networks in Discrete Time Implement Monotone-Regular Behaviors
Toward Unified Hybrid Simulation Techniques for Spiking Neural Networks
Parallel computation in spiking neural nets
On the relevance of time in neural computation and learning
Real-Time Computing Without Stable States: A New Framework for Neural Computation Based on Perturbations
Robust learning in SpikeProp
Investigating the computational power of spiking neurons with non-standard behaviors
Spike-Timing Error Backpropagation in Theta Neuron Networks
Accelerating Event-Driven Simulation of Spiking Neurons with Multiple Synaptic Time Constants
Counting to Ten with Two Fingers: Compressed Counting with Spiking Neurons
Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons
On the Algorithmic Power of Spiking Neural Networks
On computation with pulses
Spiking neural nets with symbolic internal state
Shattering All Sets of ‘k’ Points in “General Position” Requires (k − 1)/2 Parameters
Spiking neurons and the induction of finite state machines
Cites Work

