Mathematical Research Data Initiative

Binary quantized network training with sharpness-aware minimization

From MaRDI portal
Publication:2111176

DOI: 10.1007/s10915-022-02064-7 · OpenAlex: W4311036102 · MaRDI QID: Q2111176 · FDO: Q2111176

Fengmiao Bian, Xiaoqun Zhang, Ren Liu

Publication date: 23 December 2022

Published in: Journal of Scientific Computing

Full work available at URL: https://doi.org/10.1007/s10915-022-02064-7
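
The identifiers above also allow this record to be fetched programmatically. Below is a minimal sketch in Python using standard DOI content negotiation against doi.org with the DOI listed on this page; the `requests` library and the BibTeX response format are assumptions of the sketch, and the exact metadata returned depends on the registration agency.

    import requests

    # DOI taken from this record (Publication:2111176)
    DOI = "10.1007/s10915-022-02064-7"

    # Ask doi.org for BibTeX instead of the landing page via content negotiation.
    resp = requests.get(
        f"https://doi.org/{DOI}",
        headers={"Accept": "application/x-bibtex"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.text)  # BibTeX entry for the publication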



zbMATH Keywords

binary quantized network; sharpness-aware minimization
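
The two keywords name the main ingredients of the paper: networks whose weights are quantized to binary values, trained with sharpness-aware minimization (SAM), i.e. minimizing the worst-case loss within a small neighborhood of the current weights, min_w max_{||eps|| <= rho} L(w + eps). The sketch below illustrates, on a toy least-squares problem in Python/NumPy, how a generic SAM step can be combined with a sign quantizer and a straight-through gradient estimator; the toy data, step sizes, and estimator choice are assumptions for illustration only, not the algorithm of the paper itself.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))           # toy inputs
    w_true = np.sign(rng.normal(size=10))    # binary "ground truth" weights
    y = X @ w_true + 0.1 * rng.normal(size=200)

    def loss_and_grad(w):
        # Forward pass uses binarized weights sign(w); the gradient is passed
        # straight through the quantizer (straight-through estimator, an
        # assumption of this sketch, common in binary network training).
        wb = np.sign(w)
        r = X @ wb - y
        loss = 0.5 * np.mean(r ** 2)
        grad = X.T @ r / len(y)
        return loss, grad

    w = rng.normal(size=10)    # latent full-precision weights
    lr, rho = 0.1, 0.05        # step size and SAM radius (assumed values)

    for step in range(100):
        _, g = loss_and_grad(w)
        eps = rho * g / (np.linalg.norm(g) + 1e-12)   # ascent step toward sharper loss
        _, g_sam = loss_and_grad(w + eps)             # gradient at the perturbed point
        w -= lr * g_sam                               # descend with the SAM gradient

    print("final loss:", loss_and_grad(w)[0])
    print("sign agreement with w_true:", np.mean(np.sign(w) == w_true))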


Mathematics Subject Classification ID

  • Numerical mathematical programming methods (65K05)
  • Artificial neural networks and deep learning (68T07)


Cites Work

  • Title not available
  • Convergence of stochastic proximal gradient algorithm
  • BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights
  • Multilayer feedforward neural networks with single powers-of-two weights


Cited In (3)

  • Title not available
  • Training neural networks from an ergodic perspective
  • Stochastic quantization for learning accurate low-bit deep neural networks

Uses Software

  • CIFAR
  • ImageNet
  • AlexNet
  • GPipe
  • EfficientNet
  • XNOR-Net
  • Faster R-CNN






This page was built for publication: Binary quantized network training with sharpness-aware minimization


Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:2111176&oldid=14604022"
This page was last edited on 1 February 2024, at 22:13.