On the infinite-depth limit of finite-width neural networks


arXiv: 2210.00688
MaRDI QID: Q6412693


Authors: Soufiane Hayou


Publication date: 2 October 2022

Abstract: In this paper, we study the infinite-depth limit of finite-width residual neural networks with random Gaussian weights. With proper scaling, we show that by fixing the width and taking the depth to infinity, the pre-activations converge in distribution to a zero-drift diffusion process. Unlike the infinite-width limit, where the pre-activations converge weakly to a Gaussian random variable, the infinite-depth limit yields different distributions depending on the choice of the activation function. We document two cases where these distributions admit (different) closed-form expressions. We further show an intriguing change-of-regime phenomenon in the post-activation norms when the width increases from 3 to 4. Lastly, we study the sequential infinite-depth-then-infinite-width limit and compare it with the more commonly studied infinite-width-then-infinite-depth limit.
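The limiting behavior described in the abstract can be probed numerically. Below is a minimal simulation sketch, assuming the residual update y_{l+1} = y_l + L^{-1/2} W_l phi(y_l) with i.i.d. Gaussian weight entries of variance 1/width and a fixed input; this parameterization is an assumption consistent with the "proper scaling" mentioned above, not necessarily the paper's exact setup. It samples many independent networks at a fixed small width and prints summary statistics of the final pre-activations as depth grows.

    # Minimal simulation sketch. Assumptions (not from the paper's exact
    # equations): residual update y_{l+1} = y_l + L^{-1/2} * W_l @ phi(y_l),
    # weight entries i.i.d. N(0, 1/width), fixed all-ones input.
    import numpy as np

    def final_preactivations(width, depth, n_nets=20_000, phi=np.tanh, seed=0):
        """Sample the depth-L pre-activations of n_nets independent
        finite-width residual networks with 1/sqrt(depth) branch scaling."""
        rng = np.random.default_rng(seed)
        y = np.ones((n_nets, width))          # same fixed input for all nets
        scale = 1.0 / np.sqrt(depth)
        for _ in range(depth):
            # Fresh Gaussian weights for every network at every layer.
            w = rng.standard_normal((n_nets, width, width)) / np.sqrt(width)
            y = y + scale * np.einsum("nij,nj->ni", w, phi(y))
        return y

    if __name__ == "__main__":
        # With width fixed, the empirical law of a coordinate should
        # stabilize as depth grows (convergence in distribution).
        for depth in (10, 100, 1000):
            y = final_preactivations(width=4, depth=depth)
            q = np.quantile(y[:, 0], [0.05, 0.5, 0.95])
            print(f"depth={depth:4d}  std={y[:, 0].std():.3f}  "
                  f"q05={q[0]:+.3f}  med={q[1]:+.3f}  q95={q[2]:+.3f}")

Comparing the printed quantiles across depths gives a crude check that the pre-activation law settles down; swapping phi (e.g. np.tanh for another activation) probes the abstract's claim that the limiting distribution depends on the activation function.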