A Learning Framework for Winner-Take-All Networks with Stochastic Synapses
From MaRDI portal
Publication: 5157187
DOI: 10.1162/neco_a_01080
zbMath: 1480.92015
arXiv: 1708.04251
OpenAlex: W2963569875
Wikidata: Q52322425
Scholia: Q52322425
MaRDI QID: Q5157187
Authors: Gert Cauwenberghs, Hesham Mostafa
Publication date: 12 October 2021
Published in: Neural Computation
Full work available at URL: https://arxiv.org/abs/1708.04251
MSC classifications:
- Neural networks for/in biological studies, artificial life and related topics (92B20)
- Regularization by noise (60H50)
Cites Work
- Learning in the machine: random backpropagation and the deep learning channel
- Simple statistical gradient-following algorithms for connectionist reinforcement learning
- Training Products of Experts by Minimizing Contrastive Divergence
- Bayesian Spiking Neurons I: Inference
- Equivalence of Backpropagation and Contrastive Hebbian Learning in a Layered Network
- Rhythmic Inhibition Allows Neural Networks to Search for Maximally Consistent States