Advancing neural network calibration: the role of gradient decay in large-margin Softmax optimization
From MaRDI portal
Publication:6587016
Recommendations
- A study on relationship between prediction uncertainty and robustness to noisy data
- AdaLo: adaptive learning rate optimizer with loss for classification
- Why does large batch training result in poor generalization? A comprehensive explanation and a better strategy from the viewpoint of stochastic optimization
- Homotopy relaxation training algorithms for infinite-width two-layer ReLU neural networks
- Levenberg-Marquardt multi-classification using hinge loss function
Cites work