Network inference with hidden units
From MaRDI portal
Abstract: We derive learning rules for finding the connections between units in stochastic dynamical networks from the recorded history of a "visible" subset of the units. We consider two models. In both of them, the visible units are binary and stochastic. In one model the "hidden" units are continuous-valued, with sigmoidal activation functions, and in the other they are binary and stochastic like the visible ones. We derive exact learning rules for both cases. For the stochastic case, performing the exact calculation requires, in general, repeated summations over a number of configurations that grows exponentially with the size of the system and the data length, which is not feasible for large systems. We derive a mean-field theory, based on a factorized ansatz for the distribution of hidden-unit states, which offers an attractive alternative for large systems. We present the results of some numerical calculations that illustrate key features of the two models and, for the stochastic case, the exact and approximate calculations.
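The factorized ansatz described in the abstract can be illustrated with a minimal sketch: hidden binary spins in a kinetic Ising (Glauber) model are replaced by their mean magnetizations, which are propagated forward in time given the observed visible history. All variable names, the coupling matrix, and the specific update rule below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

n_vis, n_hid, T = 5, 3, 50          # assumed network sizes and data length
n = n_vis + n_hid
J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))  # assumed couplings

# Simulate a full spin history, then treat the hidden part as unobserved.
s = np.ones((T, n))
for t in range(1, T):
    h = J @ s[t - 1]                  # local fields at time t-1
    p_up = 0.5 * (1.0 + np.tanh(h))   # Glauber transition probability P(s_i=+1)
    s[t] = np.where(rng.random(n) < p_up, 1.0, -1.0)

s_vis = s[:, :n_vis]                  # observed (visible) history
m_hid = np.zeros((T, n_hid))          # factorized mean-field magnetizations

# Naive mean-field forward pass: replace each hidden spin by its mean,
# m_h(t) = tanh( sum_v J_hv s_v(t-1) + sum_h' J_hh' m_h'(t-1) ),
# avoiding the exponential sum over hidden configurations.
for t in range(1, T):
    field = (J[n_vis:, :n_vis] @ s_vis[t - 1]
             + J[n_vis:, n_vis:] @ m_hid[t - 1])
    m_hid[t] = np.tanh(field)

print(m_hid[-1])  # estimated hidden magnetizations at the last time step
```

In a learning setting, these magnetizations would stand in for the hidden spins when evaluating the likelihood gradient with respect to the couplings, which is where the exponential cost of the exact calculation is avoided.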
Recommendations
- Belief propagation and replicas for inference and learning in a kinetic Ising model with hidden spins
- Stochastic dynamics and learning rules in layered neural networks.
- Connectionist learning of belief networks
- The appropriateness of ignorance in the inverse kinetic Ising model
- scientific article; zbMATH DE number 2033137
Cites work
- scientific article; zbMATH DE number 3567782
- scientific article; zbMATH DE number 1273988
- scientific article; zbMATH DE number 1149420
- scientific article; zbMATH DE number 3446222
- A new look at the statistical model identification
- Bayesian reasoning and machine learning.
- Collective properties of neural networks: A statistical physics approach
- Estimating the dimension of a model
- Time-Dependent Statistics of the Ising Model
Cited in (14)
- State sampling dependence of Hopfield network inference
- Belief propagation and replicas for inference and learning in a kinetic Ising model with hidden spins
- Learning of couplings for random asymmetric kinetic Ising models revisited: random correlation matrices and learning curves
- Critical scaling in hidden state inference for linear Langevin dynamics
- Inference for dynamics of continuous variables: the extended Plefka expansion with hidden nodes
- Learning the pseudoinverse solution to network weights
- Counting hidden neural networks
- The appropriateness of ignorance in the inverse kinetic Ising model
- Inferring hidden states in a random kinetic Ising model: replica analysis
- Effects of hidden nodes on network structure inference
- scientific article; zbMATH DE number 5957255
- Exact Inferences in a Neural Implementation of a Hidden Markov Model
- scientific article; zbMATH DE number 2033137
- Detecting hidden nodes in networks based on random variable resetting method