Deep neural networks with dependent weights: Gaussian Process mixture limit, heavy tails, sparsity and compressibility
Publication: 6399345
arXiv: 2205.08187
MaRDI QID: Q6399345
FDO: Q6399345
Authors: Hoil Lee, Fadhel Ayed, Paul M. Jung, Juho Lee, Hongseok Yang, François Caron
Publication date: 17 May 2022
Abstract: This article studies the infinite-width limit of deep feedforward neural networks whose weights are dependent, and modelled via a mixture of Gaussian distributions. Each hidden node of the network is assigned a nonnegative random variable that controls the variance of the outgoing weights of that node. We make minimal assumptions on these per-node random variables: they are iid and their sum, in each layer, converges to some finite random variable in the infinite-width limit. Under this model, we show that each layer of the infinite-width neural network can be characterised by two simple quantities: a non-negative scalar parameter and a Lévy measure on the positive reals. If the scalar parameters are strictly positive and the Lévy measures are trivial at all hidden layers, then one recovers the classical Gaussian process (GP) limit, obtained with iid Gaussian weights. More interestingly, if the Lévy measure of at least one layer is non-trivial, we obtain a mixture of Gaussian processes (MoGP) in the large-width limit. The behaviour of the neural network in this regime is very different from the GP regime. One obtains correlated outputs, with non-Gaussian distributions, possibly with heavy tails. Additionally, we show that, in this regime, the weights are compressible, and feature learning is possible. Many sparsity-promoting neural network models can be recast as special cases of our approach, and we discuss their infinite-width limits; we also present an asymptotic analysis of the pruning error. We illustrate some of the benefits of the MoGP regime over the GP regime in terms of representation learning and compressibility on simulated, MNIST and Fashion MNIST datasets.
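As a concrete illustration of the weight model described in the abstract, the following is a minimal sketch (not taken from the paper or its companion repository): each hidden node draws a nonnegative scale variable, and all of that node's outgoing weights are Gaussian with variance given by that scale, so the weights are dependent within a node and marginally a Gaussian scale mixture. The inverse-gamma scale distribution, the crude 1/n normalisation, the ReLU activation and the layer widths are illustrative assumptions only; the paper's conditions on the per-node scales (iid, with layer sums converging in the infinite-width limit) are more general.

```python
# Illustrative sketch of a network with per-node variance scales, producing
# dependent, conditionally Gaussian weights. Assumptions (scale law,
# normalisation, activation, widths) are for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def sample_layer_weights(n_in, n_out, rng, a=1.0, b=1.0):
    # One nonnegative scale per incoming (hidden) node: lam[j] ~ InvGamma(a, b).
    lam = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=n_in)
    lam = lam / n_in  # crude stand-in for the paper's per-layer normalisation
    eps = rng.standard_normal((n_in, n_out))
    # All n_out outgoing weights of node j share sqrt(lam[j]):
    # w[j, k] = sqrt(lam_j) * eps_{jk}, hence dependence within each node.
    return np.sqrt(lam)[:, None] * eps

def forward(x, widths, rng):
    # x: (batch, d_in); widths: hidden-layer widths followed by the output width.
    h = x
    for n_out in widths[:-1]:
        h = np.maximum(h @ sample_layer_weights(h.shape[1], n_out, rng), 0.0)  # ReLU
    return h @ sample_layer_weights(h.shape[1], widths[-1], rng)  # linear output

x = rng.standard_normal((4, 10))
y = forward(x, widths=[512, 512, 1], rng=rng)
print(y.shape)  # (4, 1)
```

Because every outgoing weight of a node shares that node's scale, pruning nodes with small scales removes whole groups of weights at once, which is the compressibility phenomenon the abstract refers to.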
Has companion code repository: https://github.com/fadhela/mogp
Mathematics Subject Classification: Artificial neural networks and deep learning (68T07); Neural nets and related approaches to inference from stochastic processes (62M45); Limit theorems in probability theory (60F99)