Training restricted Boltzmann machines: an introduction
From MaRDI portal
Publication:898295
DOI: 10.1016/j.patcog.2013.05.025
zbMath: 1326.68220
OpenAlex: W2018168021
Wikidata: Q57254703
Scholia: Q57254703
MaRDI QID: Q898295
Publication date: 8 December 2015
Published in: Pattern Recognition
Full work available at URL: https://doi.org/10.1016/j.patcog.2013.05.025
Keywords: Markov chains; neural networks; Gibbs sampling; Markov random fields; parallel tempering; restricted Boltzmann machines; contrastive divergence learning
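The keywords above name the training machinery the surveyed paper introduces: Gibbs sampling in an RBM's bipartite Markov random field, driven by contrastive divergence (CD) learning. As a hedged illustration only (the layer sizes, learning rate, and toy data below are assumptions for this sketch, not taken from the paper), here is a minimal CD-1 update for a tiny binary RBM in pure Python:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample(p):
    """Draw a binary unit state from its activation probability."""
    return 1.0 if random.random() < p else 0.0

# Tiny RBM: n_v visible and n_h hidden binary units (illustrative sizes).
n_v, n_h = 4, 3
W = [[random.gauss(0.0, 0.1) for _ in range(n_h)] for _ in range(n_v)]
b = [0.0] * n_v  # visible biases
c = [0.0] * n_h  # hidden biases

def hidden_probs(v):
    # p(h_j = 1 | v) = sigmoid(c_j + sum_i v_i W_ij)
    return [sigmoid(c[j] + sum(v[i] * W[i][j] for i in range(n_v)))
            for j in range(n_h)]

def visible_probs(h):
    # p(v_i = 1 | h) = sigmoid(b_i + sum_j h_j W_ij)
    return [sigmoid(b[i] + sum(h[j] * W[i][j] for j in range(n_h)))
            for i in range(n_v)]

def cd1_update(v0, lr=0.1):
    """One CD-1 step: a single Gibbs transition approximates the model term."""
    ph0 = hidden_probs(v0)               # positive phase
    h0 = [sample(p) for p in ph0]
    pv1 = visible_probs(h0)              # one Gibbs step (negative phase)
    v1 = [sample(p) for p in pv1]
    ph1 = hidden_probs(v1)
    # Gradient estimate: <v h>_data - <v h>_reconstruction
    for i in range(n_v):
        for j in range(n_h):
            W[i][j] += lr * (v0[i] * ph0[j] - v1[i] * ph1[j])
    for i in range(n_v):
        b[i] += lr * (v0[i] - v1[i])
    for j in range(n_h):
        c[j] += lr * (ph0[j] - ph1[j])

# Toy data with two modes, purely for demonstration.
data = [[1, 1, 0, 0], [0, 0, 1, 1]]
for _ in range(200):
    cd1_update(random.choice(data))
```

The parallel-tempering keyword refers to a more robust sampler that replaces the single Gibbs step in the negative phase with swaps between chains at several temperatures; the CD-1 step above is the simplest member of that family.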
Related Items (17)
- Equilibrium and non-equilibrium regimes in the learning of restricted Boltzmann machines*
- Features Selection as a Nash-Bargaining Solution: Applications in Online Advertising and Information Systems
- Statistical physics and representations in real and artificial neural networks
- Spatiotemporal-textual point processes for crime linkage detection
- Topologically ordered feature extraction based on sparse group restricted Boltzmann machines
- An adaptive deep Q-learning strategy for handwritten digit recognition
- Bandgap optimization in combinatorial graphs with tailored ground states: application in quantum annealing
- Unsupervised learning of disentangled representations in deep restricted kernel machines with orthogonality constraints
- Markov chain stochastic DCA and applications in deep learning with PDEs regularization
- Deep Restricted Kernel Machines Using Conjugate Feature Duality
- A bound for the convergence rate of parallel tempering for sampling restricted Boltzmann machines
- Information integration from distributed threshold-based interactions
- Algorithms for estimating the partition function of restricted Boltzmann machines
- Class sparsity signature based restricted Boltzmann machine
- Learning and retrieval operational modes for three-layer restricted Boltzmann machines
- Unnamed Item
- Replica analysis of the lattice-gas restricted Boltzmann machine partition function
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Refinements of Universal Approximation Results for Deep Belief Networks and Restricted Boltzmann Machines
- Learning a Generative Model of Images by Factoring Appearance and Shape
- Bounding the Bias of Contrastive Divergence Learning
- Reducing the Dimensionality of Data with Neural Networks
- Training Products of Experts by Minimizing Contrastive Divergence
- Learning Deep Architectures for AI
- Representational Power of Restricted Boltzmann Machines and Deep Belief Networks
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- Markov Chains
- Justifying and Generalizing Contrastive Divergence
- A Fast Learning Algorithm for Deep Belief Nets
- Monte Carlo sampling methods using Markov chains and their applications