Advancing neural network calibration: the role of gradient decay in large-margin Softmax optimization
From MaRDI portal
Publication: Q6587016
DOI: 10.1016/j.neunet.2024.106457
zbMATH Open: 1545.68117
MaRDI QID: Q6587016
Authors: Siyuan Zhang, Linbo Xie
Publication date: 13 August 2024
Published in: Neural Networks
Recommendations
- A study on relationship between prediction uncertainty and robustness to noisy data
- AdaLo: adaptive learning rate optimizer with loss for classification
- Why does large batch training result in poor generalization? A comprehensive explanation and a better strategy from the viewpoint of stochastic optimization
- Homotopy relaxation training algorithms for infinite-width two-layer ReLU neural networks
- Levenberg-Marquardt multi-classification using hinge loss function
MSC classifications: Numerical optimization and variational techniques (65K10); Artificial neural networks and deep learning (68T07)