Mean-field inference methods for neural networks
Publication:5059670
DOI: 10.1088/1751-8121/ab7f65
OpenAlex: W3099499532
MaRDI QID: Q5059670
Publication date: 16 January 2023
Published in: Journal of Physics A: Mathematical and Theoretical
Full work available at URL: https://arxiv.org/abs/1911.00890
Related Items
- Align, then memorise: the dynamics of learning with feedback alignment
- Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation
- A dynamical mean-field theory for learning in restricted Boltzmann machines
- Dynamical mean-field theory for stochastic gradient descent in Gaussian mixture classification
Uses Software
Cites Work
- An iterative construction of solutions of the TAP equations for the Sherrington-Kirkpatrick model
- Thermodynamics of restricted Boltzmann machines and related learning dynamics
- The Parisi formula
- Statistical Mechanics of Learning
- Entropy landscape of solutions in the binary perceptron problem
- Phase Transitions and Sample Complexity in Bayes-Optimal Matrix Factorization
- A theory of solving TAP equations for Ising models with general invariant random matrices
- Reducing the Dimensionality of Data with Neural Networks
- Training Products of Experts by Minimizing Contrastive Divergence
- Large-Scale Machine Learning with Stochastic Gradient Descent
- Statistical mechanics of complex neural systems and high dimensional data
- Statistical mechanics of unsupervised feature learning in a restricted Boltzmann machine with binary synapses
- Constrained low-rank matrix estimation: phase transitions, approximate message passing and applications
- Perceptron capacity revisited: classification ability for correlated patterns
- Graphical Models, Exponential Families, and Variational Inference
- Information, Physics, and Computation
- The Nishimori line and Bayesian statistics
- The limitations of deterministic Boltzmann machine learning
- Mean-field equations for spin models with orthogonal interaction matrices
- On-Line Learning in Neural Networks
- Boltzmann Machine and Mean-Field Approximation for Structured Sparse Decompositions
- Statistical Physics of Spin Glasses and Information Processing
- DOI: 10.1162/153244303321897690
- Learning by on-line gradient descent
- A mean field view of the landscape of two-layer neural networks
- Spin-glass theory for pedestrians
- Approximate survey propagation for statistical inference
- Large deviation analysis of function sensitivity in random deep neural networks
- Neural networks and physical systems with emergent collective computational abilities
- The space of interactions in neural network models
- Algorithmic Learning Theory
- A Fast Learning Algorithm for Deep Belief Nets
- Understanding Machine Learning
- Learning from correlated patterns by simple perceptrons
- Distribution of eigenvalues for some sets of random matrices
- A CDMA multiuser detection algorithm on the basis of belief propagation
- Statistical theory of superlattices
- A Stochastic Approximation Method
- Compressed sensing
- Approximation by superpositions of a sigmoidal function