On stochastic roundoff errors in gradient descent with low-precision computation

From MaRDI portal

DOI: 10.1007/S10957-023-02345-7
arXiv: 2202.12276
MaRDI QID: Q6150643


Authors: Lu Xia, Stefano Massei, M. E. Hochstenbach, B. Koren


Publication date: 9 February 2024

Published in: Journal of Optimization Theory and Applications

Abstract: When implementing the gradient descent method in low precision, employing stochastic rounding schemes helps prevent stagnation of convergence caused by the vanishing gradient effect. Unbiased stochastic rounding yields zero bias by preserving small updates with probabilities proportional to their relative magnitudes. This study provides a theoretical explanation for the stagnation of the gradient descent method in low-precision computation. Additionally, we propose two new stochastic rounding schemes that trade the zero-bias property for a larger probability of preserving small gradients. Our methods yield a constant rounding bias that, on average, lies in a descent direction. For convex problems, we prove that the proposed rounding methods typically have a beneficial effect on the convergence rate of gradient descent. We validate our theoretical analysis by comparing the performance of various rounding schemes when optimizing a multinomial logistic regression model and when training a simple neural network with an 8-bit floating-point format.
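
To make the rounding mechanism described in the abstract concrete, below is a minimal sketch (not taken from the paper) of unbiased stochastic rounding onto a uniform grid, followed by a toy gradient-descent step in which round-to-nearest stagnates. The function name `stochastic_round`, the grid spacing `step`, and all numerical values are illustrative assumptions; the paper itself works with floating-point formats and additionally proposes biased variants with a larger probability of preserving small updates.

```python
import numpy as np

def stochastic_round(x, step, rng=None):
    """Unbiased stochastic rounding of x onto a grid with spacing `step`.

    Illustrative sketch: x is rounded up with probability equal to its relative
    distance from the lower grid point, so E[stochastic_round(x)] = x
    (zero rounding bias).
    """
    rng = np.random.default_rng() if rng is None else rng
    low = np.floor(np.asarray(x) / step) * step   # nearest grid point below x
    frac = (x - low) / step                       # relative position in [0, 1)
    round_up = rng.random(np.shape(x)) < frac     # keep small remainders probabilistically
    return np.where(round_up, low + step, low)

# Toy illustration of stagnation: a gradient-descent update smaller than the
# grid spacing is lost by round-to-nearest but preserved, on average, by
# stochastic rounding (all numbers here are illustrative).
w, update, step = 1.0, -0.004, 0.01
w_rn = np.round((w + update) / step) * step   # round to nearest: stays at 1.00, no progress
w_sr = stochastic_round(w + update, step)     # 0.99 with prob. 0.4, else 1.00; mean 0.996
print(w_rn, w_sr)
```

In expectation the stochastic update equals the exact value w + update, which is the zero-bias property the abstract refers to; the paper's proposed schemes deliberately give up this property to raise the probability that small gradients survive rounding.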


Full work available at URL: https://arxiv.org/abs/2202.12276






