Stochastic Markov gradient descent and training low-bit neural networks
Publication: 2073135
DOI: 10.1007/s43670-021-00015-1
OpenAlex: W3206635749
MaRDI QID: Q2073135
FDO: Q2073135
Jonathan Ashbrock, Alexander M. Powell
Publication date: 27 January 2022
Published in: Sampling Theory, Signal Processing, and Data Analysis
Full work available at URL: https://arxiv.org/abs/2008.11117
Keywords: neural networks; quantization; stochastic gradient descent; low-memory training; stochastic Markov gradient descent
Cites Work
- Approximation by superpositions of a sigmoidal function
- Stochastic gradient descent, weighted sampling, and the randomized Kaczmarz algorithm
- Universality of deep convolutional neural networks
- Deep distributed convolutional neural networks: Universality
- Lipschitz properties for deep convolutional networks
- Blended coarse gradient descent for full quantization of deep neural networks
- Deep Network Approximation Characterized by Number of Neurons
- Least-Squares Halftoning via Human Vision System and Markov Gradient Descent (LS-MGD): Algorithm and Analysis
Cited In (4)