Training thinner and deeper neural networks: jumpstart regularization
Publication:2170213
Abstract: Neural networks are more expressive when they have multiple layers. However, conventional training methods succeed only if the depth does not lead to numerical issues such as exploding or vanishing gradients, which occur less frequently when the layers are sufficiently wide. Increasing width to attain greater depth, in turn, requires heavier computational resources and leads to overparameterized models. These issues have been partially addressed by model compression methods such as quantization and pruning, some of which rely on normalization-based regularization of the loss function to make the effect of most parameters negligible. In this work, we propose instead to use regularization to prevent neurons from dying or becoming linear, a technique which we denote as jumpstart regularization. In comparison to conventional training, we obtain neural networks that are thinner, deeper, and, most importantly, more parameter-efficient.
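The paper's exact regularizer is not reproduced on this page. As a minimal illustrative sketch only, one could imagine a hinge-style batch penalty that is positive precisely when all of a ReLU unit's pre-activations share one sign, i.e. when the unit is dead (always negative) or acts linearly (always positive) on the batch; the function name `jumpstart_penalty` and this particular form are assumptions for illustration, not the authors' definition:

```python
import numpy as np

def jumpstart_penalty(Z):
    """Hedged sketch of a penalty against dead or linear ReLU units.

    Z: (batch, units) array of pre-activations of one hidden layer.
    A unit whose pre-activations are negative on every sample is 'dead'
    (its ReLU output and gradient vanish on the batch); a unit whose
    pre-activations are positive on every sample is linear on the batch.
    The penalty is positive in exactly those two cases and zero as soon
    as the unit sees both signs.
    """
    z_min = Z.min(axis=0)             # per-unit minimum over the batch
    z_max = Z.max(axis=0)             # per-unit maximum over the batch
    dead = np.maximum(0.0, -z_max)    # > 0 iff all pre-activations < 0
    linear = np.maximum(0.0, z_min)   # > 0 iff all pre-activations > 0
    return float(np.sum(dead + linear))

# Example: unit 0 sees both signs (no penalty); unit 1 is always
# negative ('dead'); unit 2 is always positive ('linear').
Z = np.array([[ 1.0, -0.5, 0.3],
              [-2.0, -1.5, 0.7]])
print(jumpstart_penalty(Z))  # 0.5 + 0.3 = 0.8
```

In training, such a term would be scaled by a coefficient and added to the loss, so that gradients push each unit back toward the regime where the ReLU is genuinely nonlinear on the batch.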
Recommendations
- Efficient and sparse neural networks by pruning weights in a multiobjective learning approach
- Make \(\ell_1\) regularization effective in training sparse CNN
- Transformed \(\ell_1\) regularization for learning sparse deep neural networks
- A new initialization method based on normed statistical spaces in deep networks
- Fast convex pruning of deep neural networks
Cites work
- scientific article; zbMATH DE number 3314813
- A Stochastic Approximation Method
- Approximation by superpositions of a sigmoidal function
- Approximation spaces of deep neural networks
- Deep learning
- Dying ReLU and initialization: theory and numerical examples
- Learning representations by back-propagating errors
- Lossless compression of deep neural networks
- Scikit-learn: machine learning in Python
- Understanding machine learning. From theory to algorithms