BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights
From MaRDI portal
Publication:5230408
DOI: 10.1137/18M1166134
zbMath: 1419.90072
arXiv: 1801.06313
OpenAlex: W2962958489
Wikidata: Q129141394 (Scholia: Q129141394)
MaRDI QID: Q5230408
Jiancheng Lyu, Shuai Zhang, Yingyong Qi, Penghang Yin, Jack X. Xin, Stanley J. Osher
Publication date: 22 August 2019
Published in: SIAM Journal on Imaging Sciences
Full work available at URL: https://arxiv.org/abs/1801.06313
Applications of mathematical programming (90C90) ⋮ Integer programming (90C10) ⋮ Nonconvex programming, global optimization (90C26)
Related Items (4)
A homotopy training algorithm for fully connected neural networks ⋮ Learning quantized neural nets by coarse gradient method for nonlinear classification ⋮ Blended coarse gradient descent for full quantization of deep neural networks ⋮ Binary quantized network training with sharpness-aware minimization
Uses Software
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Deep relaxation: partial differential equations for optimizing deep neural networks
- Atomic Decomposition by Basis Pursuit
- Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators
- Analysis and Generalizations of the Linearized Bregman Method
- The Split Bregman Method for L1-Regularized Problems
- Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information
- Variational Analysis
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Least squares quantization in PCM
- De-noising by soft-thresholding
- Minimization of $\ell_{1-2}$ for Compressed Sensing
- Bregman Iterative Algorithms for $\ell_1$-Minimization with Applications to Compressed Sensing
- Proximité et dualité dans un espace hilbertien [Proximity and duality in a Hilbert space]