Training thinner and deeper neural networks: jumpstart regularization
From MaRDI portal
Publication: 2170213
DOI: 10.1007/978-3-031-08011-1_23
zbMATH Open: 1502.68265
arXiv: 2201.12795
OpenAlex: W4285164920
MaRDI QID: Q2170213
FDO: Q2170213
Authors: Carles Riera, Camilo Rey, Thiago Serra, Eloi Puertas, Oriol Pujol
Publication date: 30 August 2022
Abstract: Neural networks are more expressive when they have multiple layers. In turn, conventional training methods are only successful if the depth does not lead to numerical issues such as exploding or vanishing gradients, which occur less frequently when the layers are sufficiently wide. However, increasing width to attain greater depth entails the use of heavier computational resources and leads to overparameterized models. These subsequent issues have been partially addressed by model compression methods such as quantization and pruning, some of which rely on normalization-based regularization of the loss function to make the effect of most parameters negligible. In this work, we propose instead to use regularization for preventing neurons from dying or becoming linear, a technique which we denote as jumpstart regularization. In comparison to conventional training, we obtain neural networks that are thinner, deeper, and, most importantly, more parameter-efficient.
Full work available at URL: https://arxiv.org/abs/2201.12795
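The abstract describes regularization that keeps ReLU units from dying (outputting zero on all inputs) or becoming linear (acting as the identity on all inputs). The paper's exact formulation is not reproduced on this page; as a hedged illustration only, the sketch below implements one plausible hinge-style penalty on a batch of pre-activations: a unit is penalized when its maximum pre-activation is non-positive (dead) or its minimum pre-activation is non-negative (linear). The function name, margin parameter, and penalty form are assumptions for illustration, not the authors' definition.

```python
import numpy as np

def jumpstart_penalty(preacts: np.ndarray, margin: float = 0.0) -> float:
    """Hypothetical sketch of a dying/linear-unit penalty.

    preacts: array of shape (batch, units) holding pre-activations z = Wx + b.
    A unit is treated as dying if max_i z_i <= 0 (ReLU output is 0 on the
    whole batch) and as linear if min_i z_i >= 0 (ReLU is the identity on
    the whole batch). Each condition incurs a hinge penalty.
    """
    # Push each unit's batch-maximum pre-activation above +margin.
    dead = np.maximum(0.0, margin - preacts.max(axis=0))
    # Push each unit's batch-minimum pre-activation below -margin.
    linear = np.maximum(0.0, preacts.min(axis=0) + margin)
    return float(dead.sum() + linear.sum())
```

In use, such a term would be added to the training loss for each hidden layer, so gradient descent nudges every unit to be active on some inputs and inactive on others; a unit that is dead (penalty from `dead`) or always-on (penalty from `linear`) contributes a nonzero penalty, while a mixed unit contributes zero.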
Recommendations
- Efficient and sparse neural networks by pruning weights in a multiobjective learning approach
- Make \(\ell_1\) regularization effective in training sparse CNN
- Transformed \(\ell_1\) regularization for learning sparse deep neural networks
- A new initialization method based on normed statistical spaces in deep networks
- Fast convex pruning of deep neural networks
Cites Work
- Scikit-learn: machine learning in Python
- Learning representations by back-propagating errors
- A Stochastic Approximation Method
- Deep learning
- Understanding machine learning. From theory to algorithms
- Approximation by superpositions of a sigmoidal function
- Lossless compression of deep neural networks
- Approximation spaces of deep neural networks
- Dying ReLU and initialization: theory and numerical examples