Learning quantized neural nets by coarse gradient method for nonlinear classification
Publication: 2050846
DOI: 10.1007/S40687-021-00281-4
zbMATH Open: 1476.90263
arXiv: 2011.11256
OpenAlex: W3189554725
MaRDI QID: Q2050846
FDO: Q2050846
Authors: Ziang Long, Penghang Yin, Jack Xin
Publication date: 1 September 2021
Published in: Research in the Mathematical Sciences
Abstract: Quantized or low-bit neural networks are attractive due to their inference efficiency. However, training deep neural networks with quantized activations involves minimizing a discontinuous and piecewise constant loss function. Such a loss function has zero gradients almost everywhere (a.e.), which makes conventional gradient-based algorithms inapplicable. To this end, we study a novel class of biased first-order oracles, termed coarse gradients, for overcoming the vanishing gradient issue. A coarse gradient is generated by replacing the a.e. zero derivative of the quantized (i.e., staircase) ReLU activation in the chain rule with a heuristic proxy derivative called the straight-through estimator (STE). Although the ad hoc STE trick has been widely used in training quantized networks empirically, fundamental questions such as when and why it works still lack theoretical understanding. In this paper, we propose a class of STEs with certain monotonicity and consider their application to the training of a two-linear-layer network with quantized activation functions for nonlinear multi-category classification. We establish performance guarantees for the proposed STEs by showing that the corresponding coarse gradient methods converge to the global minimum, which leads to perfect classification. Lastly, we present experimental results on synthetic data as well as the MNIST dataset to verify our theoretical findings and demonstrate the effectiveness of the proposed STEs.
Full work available at URL: https://arxiv.org/abs/2011.11256
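The abstract's central device, the straight-through estimator, amounts to using the true quantized (staircase) activation in the forward pass while substituting a nonzero proxy derivative for its a.e. zero derivative in the backward pass. The sketch below illustrates this coarse-gradient construction in NumPy for a toy two-layer network; the two-bit quantization, the clipped-ReLU proxy, the squared-loss objective, and all function and variable names are illustrative assumptions rather than the paper's exact setting, which analyses multi-category classification with particular monotone STEs.

```python
import numpy as np

def quantized_relu(x, bits=2, alpha=1.0):
    """Staircase (quantized) ReLU: clip to [0, alpha], then round to 2**bits - 1 levels.
    Its true derivative is zero almost everywhere."""
    levels = 2**bits - 1
    return np.round(np.clip(x, 0.0, alpha) / alpha * levels) / levels * alpha

def ste_derivative(x, alpha=1.0):
    """Straight-through estimator: use the derivative of the clipped ReLU
    (1 on (0, alpha), 0 elsewhere) as a proxy for the a.e. zero derivative."""
    return ((x > 0) & (x < alpha)).astype(x.dtype)

def coarse_gradient_step(W, v, X, y, lr=0.1):
    """One coarse-gradient descent step for f(x) = v^T sigma(W x) with squared loss
    (a simplified stand-in for the paper's classification objective)."""
    pre = X @ W.T                        # pre-activations, shape (n, hidden)
    h = quantized_relu(pre)              # quantized hidden activations
    err = h @ v - y                      # residuals, shape (n,)
    grad_v = h.T @ err / len(y)          # exact gradient w.r.t. outer weights
    # Coarse gradient w.r.t. W: replace d sigma / d pre by the STE proxy derivative.
    grad_W = ((err[:, None] * v[None, :]) * ste_derivative(pre)).T @ X / len(y)
    return W - lr * grad_W, v - lr * grad_v

# Toy usage on random data (hypothetical setup, for illustration only).
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 4))
y = rng.normal(size=128)
W = rng.normal(size=(8, 4))
v = rng.normal(size=8)
for _ in range(50):
    W, v = coarse_gradient_step(W, v, X, y)
```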
Recommendations
- Blended coarse gradient descent for full quantization of deep neural networks
- Stochastic Markov gradient descent and training low-bit neural networks
- scientific article; zbMATH DE number 6982943
- BinaryRelax: a relaxation approach for training deep neural networks with quantized weights
- Stochastic quantization for learning accurate low-bit deep neural networks
Cites Work
- Large margin classification using the perceptron algorithm
- Title not available
- Title not available
- Linear feature transform and enhancement of classification on deep neural network
- ReLU deep neural networks and linear finite elements
- Blended coarse gradient descent for full quantization of deep neural networks
- BinaryRelax: a relaxation approach for training deep neural networks with quantized weights
Cited In (11)
- Self-organization of the batch Kohonen network under quantization effects
- Recurrence of optimum for training weight and activation quantized networks
- Title not available
- Learning Multiple Quantiles With Neural Networks
- How many bits does it take to quantize your neural network?
- Neural Quadratic Discriminant Analysis: Nonlinear Decoding with V1-Like Computation
- BinaryRelax: a relaxation approach for training deep neural networks with quantized weights
- Title not available
- Blended coarse gradient descent for full quantization of deep neural networks
- Stochastic Markov gradient descent and training low-bit neural networks
- Stochastic quantization for learning accurate low-bit deep neural networks