Blended coarse gradient descent for full quantization of deep neural networks
DOI: 10.1007/s40687-018-0177-6 · zbMath: 1422.90066 · arXiv: 1808.05240 · OpenAlex: W2964123455 · Wikidata: Q128641846 · Scholia: Q128641846 · MaRDI QID: Q2319868
Jack X. Xin, Shuai Zhang, Yingyong Qi, Penghang Yin, Jiancheng Lyu, Stanley J. Osher
Publication date: 20 August 2019
Published in: Research in the Mathematical Sciences
Full work available at URL: https://arxiv.org/abs/1808.05240
Keywords: sufficient descent property; deep neural networks; blended coarse gradient descent; weight/activation quantization
MSC: Programming involving graphs or networks (90C35); Applications of mathematical programming (90C90); Nonconvex programming, global optimization (90C26); Methods of reduced gradient type (90C52)
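As a rough illustration of the method named in the title and keywords, the following is a minimal NumPy sketch of a blended coarse-gradient update for weight quantization. The quantizer, the straight-through coarse gradient, and all hyperparameter values here are assumptions for illustration only; consult the paper at the arXiv link above for the actual algorithm and analysis.

```python
import numpy as np

def quantize(u):
    """Toy binary quantizer: sign of u scaled by its mean magnitude.
    (An assumed stand-in for the paper's weight quantizer.)"""
    return np.mean(np.abs(u)) * np.sign(u)

def bcgd_step(u, coarse_grad, lr=0.01, rho=1e-5):
    """One blended coarse gradient descent step (sketch):

        u_next = (1 - rho) * u + rho * Q(u) - lr * g,

    where Q(u) is the quantized projection of the float weights u and
    g is a coarse (straight-through) gradient evaluated at Q(u).
    Setting rho = 0 recovers plain coarse gradient descent."""
    w = quantize(u)
    g = coarse_grad(w)
    return (1.0 - rho) * u + rho * w - lr * g

# Usage on a toy quadratic objective f(w) = 0.5 * ||w - target||^2,
# whose true gradient at the quantized weights plays the role of the
# coarse gradient in this sketch.
target = np.array([0.7, -0.3, 1.2])
u = np.random.randn(3)
for _ in range(200):
    u = bcgd_step(u, lambda w: w - target)
print("quantized weights:", quantize(u))
```

The blending term rho * (Q(u) - u) pulls the float weights toward their quantized projection, which is what the "sufficient descent property" keyword above refers to in the paper's convergence analysis.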
Cites Work
- Large margin classification using the perceptron algorithm
- Stochastic Approximations and Perturbations in Forward-Backward Splitting for Monotone Operators
- ReLU Deep Neural Networks and Linear Finite Elements
- Global Convergence Properties of Conjugate Gradient Methods for Optimization
- Least squares quantization in PCM
- BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights