Generalization learning in a perceptron with binary synapses
Abstract: We consider the generalization problem for a perceptron with binary synapses, implementing the Stochastic Belief-Propagation-Inspired (SBPI) learning algorithm which we proposed earlier, and perform a mean-field calculation to obtain a differential equation which describes the behaviour of the device in the limit of a large number of synapses N. We show that the solving time of SBPI is of order N*sqrt(log(N)), while the similar, well-known clipped perceptron (CP) algorithm does not converge to a solution at all in the time frame we considered. The analysis gives some insight into the ongoing process and shows that, in this context, the SBPI algorithm is equivalent to a new, simpler algorithm, which only differs from the CP algorithm by the addition of a stochastic, unsupervised meta-plastic reinforcement process, whose rate of application must be less than sqrt(2/(pi * N)) for the learning to be achieved effectively. The analytical results are confirmed by simulations.
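To make the simplified algorithm described in the abstract concrete, the sketch below implements a clipped perceptron (CP) update acting on integer hidden synaptic states, augmented with a stochastic, unsupervised meta-plastic reinforcement step whose rate is kept below sqrt(2/(pi * N)). This is a minimal illustrative sketch, not the paper's reference implementation: the function name, the bound `hidden_depth`, the factor 0.5 on the reinforcement rate, and the toy teacher-student setup are assumptions introduced here for demonstration only.

```python
import numpy as np

def train_cp_with_reinforcement(patterns, labels, n_epochs=100, hidden_depth=15, seed=0):
    """Sketch: clipped perceptron (CP) rule plus a stochastic, unsupervised
    meta-plastic reinforcement of integer hidden states (illustrative only)."""
    rng = np.random.default_rng(seed)
    n_patterns, n_synapses = patterns.shape

    # Integer hidden states; the visible binary weights are their signs.
    hidden = rng.choice([-1, 1], size=n_synapses)
    # Reinforcement rate kept below sqrt(2 / (pi * N)), as the abstract requires.
    p_reinforce = 0.5 * np.sqrt(2.0 / (np.pi * n_synapses))

    for _ in range(n_epochs):
        for mu in rng.permutation(n_patterns):
            xi, sigma = patterns[mu], labels[mu]
            w = np.where(hidden >= 0, 1, -1)  # clipped (binary) weights
            if sigma * np.dot(w, xi) <= 0:
                # CP step: on error, move every hidden state toward
                # agreement with the desired output.
                hidden = hidden + sigma * xi
            if rng.random() < p_reinforce:
                # Unsupervised reinforcement: push each hidden state one
                # step toward its current sign, independently of the pattern.
                hidden = hidden + np.where(hidden >= 0, 1, -1)
            # Keep hidden states bounded (an illustrative choice).
            hidden = np.clip(hidden, -hidden_depth, hidden_depth)
    return np.where(hidden >= 0, 1, -1)


# Toy teacher-student usage: random +/-1 patterns labelled by a binary teacher.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N, P = 101, 400
    teacher = rng.choice([-1, 1], size=N)
    patterns = rng.choice([-1, 1], size=(P, N))
    labels = np.sign(patterns @ teacher)
    student = train_cp_with_reinforcement(patterns, labels)
    print("overlap with teacher:", np.dot(student, teacher) / N)
```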
Recommendations
- Learning and generalization errors for the 2D binary perceptron
- Generalization performance of Bayes optimal classification algorithm for learning a perceptron
- Directed drift: A new linear threshold algorithm for learning binary weights on-line
- Learning strategy for the binary perceptron
- Learning in the multilayer perceptron
Cites work
- scientific article; zbMATH DE number 1569117
- scientific article; zbMATH DE number 3231758
- Algorithmic Learning Theory
- Capacity of neural networks with discrete synaptic couplings
- Learning curves of the clipped Hebb rule for networks with binary weights
- On-line learning in the Ising perceptron
- Optimal generalization in perceptrons
- Statistical mechanics of learning
Cited in (10)
- Frozen 1-RSB structure of the symmetric Ising perceptron
- Shaping the learning landscape in neural networks around wide flat minima
- Generalization and learning error for nonlinear perceptron
- Local entropy as a measure for sampling solutions in constraint satisfaction problems
- Learning and generalization errors for the 2D binary perceptron
- A Max-Sum algorithm for training discrete neural networks
- Clustering of solutions in the symmetric binary perceptron
- A multifractal phase-space analysis of perceptrons with biased patterns
- Active online learning in the binary perceptron problem
- Generalizing with perceptrons in the case of structured phase- and pattern-spaces