The role of mutual information in variational classifiers
From MaRDI portal
Publication:6134364
Abstract: Overfitting is a well-known phenomenon related to the generation of a model that mimics too closely (or exactly) a particular instance of data, and may therefore fail to predict future observations reliably. In practice, this behaviour is controlled by various, sometimes heuristic, regularization techniques, which are motivated by deriving upper bounds on the generalization error. In this work, we study the generalization error of classifiers relying on stochastic encodings trained with the cross-entropy loss, which is often used in deep learning for classification problems. We derive bounds on the generalization error showing that there exists a regime where the generalization error is bounded by the mutual information between input features and the corresponding representations in the latent space, which are randomly generated according to the encoding distribution. Our bounds provide an information-theoretic understanding of generalization in the so-called class of variational classifiers, which are regularized by a Kullback-Leibler (KL) divergence term. These results give theoretical grounds for the highly popular KL term in variational inference methods, which was already recognized to act effectively as a regularization penalty. We further observe connections with well-studied notions such as Variational Autoencoders, Information Dropout, the Information Bottleneck and Boltzmann Machines. Finally, we perform numerical experiments on the MNIST and CIFAR datasets and show that mutual information is indeed highly representative of the behaviour of the generalization error.
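The variational classifier objective described in the abstract combines a cross-entropy term with a KL-divergence penalty on the stochastic encoding. A minimal NumPy sketch of such a loss is given below, assuming a Gaussian encoder with diagonal covariance; the function names, the `beta` weight, and the choice of a standard-normal prior are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

def variational_classifier_loss(logits, labels, mu, log_var, beta=1e-3):
    """Cross-entropy on label predictions plus a beta-weighted KL penalty.

    The KL term upper-bounds the mutual information I(X; Z) between inputs
    and their stochastic latent representations -- the quantity the paper's
    generalization bounds are expressed in.
    """
    # Numerically stable log-softmax of the classifier logits.
    shifted = logits - logits.max(axis=-1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=-1, keepdims=True))
    # Per-example cross-entropy of the true labels.
    ce = -log_probs[np.arange(len(labels)), labels]
    return np.mean(ce + beta * kl_to_standard_normal(mu, log_var))
```

When the encoder matches the prior exactly (`mu = 0`, `log_var = 0`), the KL penalty vanishes and the loss reduces to the usual cross-entropy, which is the sense in which the KL term acts purely as a regularizer.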
Recommendations
- Entropy and mutual information in models of deep neural networks
- Emergence of invariance and disentanglement in deep representations
- On inequalities between mutual information and variation
- Learning and Generalization with the Information Bottleneck
Cites work
- scientific article; zbMATH DE number 6378127
- scientific article; zbMATH DE number 1753143
- scientific article; zbMATH DE number 893887
- scientific article; zbMATH DE number 3202900
- scientific article; zbMATH DE number 3082712
- 10.1162/153244302760200704
- A Fast Learning Algorithm for Deep Belief Nets
- A learning criterion for stochastic rules
- Asymptotic evaluation of certain Markov process expectations for large time. IV
- Deep learning
- Elements of Information Theory
- Emergence of invariance and disentanglement in deep representations
- Foundations of machine learning
- How Much Does Your Data Exploration Overfit? Controlling Bias via Information Usage
- Joint maximization of accuracy and information for learning the structure of a Bayesian network classifier
- Learners that use little information
- Learning and generalization with the information bottleneck
- On the information bottleneck theory of deep learning
- PAC-Bayesian compression bounds on the prediction error of learning algorithms for classification
- Robust Large Margin Deep Neural Networks
- Robustness and generalization
- Stacked denoising autoencoders: learning useful representations in a deep network with a local denoising criterion
- The minimax learning rates of normal and Ising undirected graphical models
- Training Products of Experts by Minimizing Contrastive Divergence
This page was built for publication: The role of mutual information in variational classifiers