Training thinner and deeper neural networks: jumpstart regularization

From MaRDI portal
Publication:2170213

DOI: 10.1007/978-3-031-08011-1_23
zbMATH Open: 1502.68265
arXiv: 2201.12795
OpenAlex: W4285164920
MaRDI QID: Q2170213
FDO: Q2170213


Authors: Carles Riera, Camilo Rey, Thiago Serra, Eloi Puertas, Oriol Pujol


Publication date: 30 August 2022

Abstract: Neural networks are more expressive when they have multiple layers. In turn, conventional training methods are only successful if the depth does not lead to numerical issues such as exploding or vanishing gradients, which occur less frequently when the layers are sufficiently wide. However, increasing width to attain greater depth entails the use of heavier computational resources and leads to overparameterized models. These issues have been partially addressed by model compression methods such as quantization and pruning, some of which rely on normalization-based regularization of the loss function to make the effect of most parameters negligible. In this work, we propose instead to use regularization to prevent neurons from dying or becoming linear, a technique we denote as jumpstart regularization. In comparison to conventional training, we obtain neural networks that are thinner, deeper, and, most importantly, more parameter-efficient.


Full work available at URL: https://arxiv.org/abs/2201.12795
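
The abstract does not spell out the regularizer itself. One way to read "preventing neurons from dying or becoming linear" for ReLU networks is to penalize units whose pre-activations are all negative (dead) or all positive (effectively linear) over a batch. The sketch below illustrates that reading in PyTorch; the function name jumpstart_penalty, the margin parameter, the ThinMLP architecture, and the way the penalty is combined with the task loss are assumptions made for illustration and need not match the formulation in the paper.

```python
import torch
import torch.nn as nn

def jumpstart_penalty(preacts: torch.Tensor, margin: float = 0.1) -> torch.Tensor:
    """Illustrative batch penalty on pre-activations (shape: batch x units).

    A unit is treated as 'dead' on the batch if no example drives its
    pre-activation above +margin, and as 'linear' if no example drives it
    below -margin; both cases incur a hinge penalty. This is a sketch of
    the idea in the abstract, not the paper's exact formulation.
    """
    max_per_unit = preacts.max(dim=0).values   # most positive response of each unit
    min_per_unit = preacts.min(dim=0).values   # most negative response of each unit
    dead_penalty = torch.relu(margin - max_per_unit)    # unit never clearly active
    linear_penalty = torch.relu(margin + min_per_unit)  # unit never clearly inactive
    return (dead_penalty + linear_penalty).mean()

class ThinMLP(nn.Module):
    """A deliberately narrow, deep ReLU network that exposes its pre-activations."""
    def __init__(self, in_dim=20, width=8, depth=6, out_dim=2):
        super().__init__()
        dims = [in_dim] + [width] * depth
        self.hidden = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.head = nn.Linear(width, out_dim)

    def forward(self, x):
        preacts = []
        for layer in self.hidden:
            z = layer(x)
            preacts.append(z)
            x = torch.relu(z)
        return self.head(x), preacts

# Hypothetical usage: add the penalty to the task loss during training.
model = ThinMLP()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
x = torch.randn(64, 20)
y = torch.randint(0, 2, (64,))
logits, preacts = model(x)
task_loss = nn.functional.cross_entropy(logits, y)
reg = sum(jumpstart_penalty(z) for z in preacts) / len(preacts)
loss = task_loss + 0.1 * reg   # 0.1 is an arbitrary illustrative weight
loss.backward()
opt.step()
```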













This page was built for publication: Training thinner and deeper neural networks: jumpstart regularization
