Binary quantized network training with sharpness-aware minimization
Publication: Q2111176
DOI: 10.1007/s10915-022-02064-7
OpenAlex: W4311036102
MaRDI QID: Q2111176
Authors: Ren Liu, Fengmiao Bian, Xiaoqun Zhang
Publication date: 23 December 2022
Published in: Journal of Scientific Computing
Full work available at URL: https://doi.org/10.1007/s10915-022-02064-7
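The publication's title combines two standard ingredients: binary weight quantization and sharpness-aware minimization (SAM). As a rough illustration only (not the paper's specific algorithm), the generic SAM update first ascends to a worst-case weight perturbation within a small ball, then descends using the gradient taken at that perturbed point; binarization can then be applied to the trained weights. The toy quadratic loss, the radius `rho`, and the learning rate below are all illustrative assumptions.

```python
import numpy as np

# Toy loss and its gradient; a stand-in for a network loss, chosen
# only so the sketch is self-contained and runnable.
def loss(w):
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

def sam_step(w, lr=0.1, rho=0.05):
    """One generic sharpness-aware minimization step (illustrative)."""
    g = grad(w)
    # Ascent direction: move to the (approximately) worst-case point
    # within an L2 ball of radius rho around the current weights.
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    g_adv = grad(w + eps)      # gradient evaluated at the perturbed weights
    return w - lr * g_adv      # descend with the sharpness-aware gradient

w = np.array([1.0, -2.0, 0.5])
for _ in range(100):
    w = sam_step(w)

# Binary quantization of the trained weights via the sign function.
w_bin = np.sign(w)
```

This only sketches the two components named in the title under simplifying assumptions; the paper itself concerns how to combine them during training, which this snippet does not reproduce.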
Recommendations
- Post-training Quantization for Neural Networks with Provable Guarantees
- Stochastic Markov gradient descent and training low-bit neural networks
- BinaryRelax: a relaxation approach for training deep neural networks with quantized weights
- Learning quantized neural nets by coarse gradient method for nonlinear classification
- Blended coarse gradient descent for full quantization of deep neural networks
Classification (MSC): Numerical mathematical programming methods (65K05); Artificial neural networks and deep learning (68T07)
Cites Work
Cited In (4)
Uses Software