Decreasing the size of the restricted Boltzmann machine
Publication:5154145
DOI: 10.1162/NECO_a_01176 · zbMATH Open: 1476.68238 · arXiv: 1807.02999 · OpenAlex: W2860169200 · Wikidata: Q91588978 (Scholia) · MaRDI QID: Q5154145
Authors: Yohei Saito, Takuya Kato
Publication date: 1 October 2021
Published in: Neural Computation
Abstract: We propose a method to decrease the number of hidden units of the restricted Boltzmann machine while avoiding a decrease in performance as measured by the Kullback-Leibler divergence. We then demonstrate the algorithm using numerical simulations.
Full work available at URL: https://arxiv.org/abs/1807.02999
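The abstract's idea of shrinking an RBM while monitoring the Kullback-Leibler divergence can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a hypothetical greedy baseline that, for a small Bernoulli RBM, enumerates the exact visible marginal p(v) and drops the hidden unit whose removal increases KL(target || model) the least. All function names and the brute-force enumeration are illustrative assumptions, feasible only for a handful of units.

```python
import itertools
import numpy as np

def rbm_visible_probs(W, b, c):
    """Exact marginal p(v) of a Bernoulli RBM by enumeration (small sizes only).

    Energy: E(v, h) = -b.v - c.h - v^T W h, so after summing out h,
    log p(v) = b.v + sum_j softplus(c_j + v . W[:, j]) - log Z.
    """
    nv = W.shape[0]
    vs = np.array(list(itertools.product([0, 1], repeat=nv)))  # all 2^nv states
    act = vs @ W + c                                  # hidden pre-activations
    log_unnorm = vs @ b + np.sum(np.logaddexp(0.0, act), axis=1)
    p = np.exp(log_unnorm - log_unnorm.max())         # subtract max for stability
    return p / p.sum()

def kl(p, q, eps=1e-12):
    """KL(p || q) for two distributions over the same finite state space."""
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def prune_one_hidden(W, b, c, target):
    """Greedily remove the hidden unit whose deletion raises KL the least."""
    nh = W.shape[1]
    best = None
    for j in range(nh):
        keep = [k for k in range(nh) if k != j]
        d = kl(target, rbm_visible_probs(W[:, keep], b, c[keep]))
        if best is None or d < best[0]:
            best = (d, j)
    d, j = best
    keep = [k for k in range(nh) if k != j]
    return W[:, keep], c[keep], d
```

A usage example: fit or sample a small RBM, take its own p(v) as the target, then call `prune_one_hidden` repeatedly, stopping when the reported KL divergence exceeds a tolerance. The greedy choice here stands in for the paper's actual criterion, which the abstract does not spell out.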
Cites Work
- Reducing the Dimensionality of Data with Neural Networks
- Training Products of Experts by Minimizing Contrastive Divergence
- A Fast Learning Algorithm for Deep Belief Nets
- Representational Power of Restricted Boltzmann Machines and Deep Belief Networks
- An efficient learning procedure for deep Boltzmann machines
- Solving the quantum many-body problem with artificial neural networks
- Learning compositional representations of interacting systems with restricted Boltzmann machines: comparative study of lattice proteins