Precision-aware deterministic and probabilistic error bounds for floating point summation
Publication: 6093390
DOI: 10.1007/S00211-023-01370-Y
zbMATH Open: 1527.65030
arXiv: 2203.15928
OpenAlex: W4386283552
MaRDI QID: Q6093390
Authors: Eric Hallman, Ilse C. F. Ipsen
Publication date: 6 October 2023
Published in: Numerische Mathematik
Abstract: We analyze the forward error in the floating point summation of real numbers, for computations in low precision or extreme-scale problem dimensions that push the limits of the precision. We present a systematic recurrence for a martingale on a computational tree, which leads to explicit and interpretable bounds without asymptotic big-O terms. Two probability parameters strengthen the precision-awareness of our bounds: one parameter controls the first order terms in the summation error, while the second one is designed for controlling higher order terms in low precision or extreme-scale problem dimensions. Our systematic approach yields new deterministic and probabilistic error bounds for three classes of mono-precision algorithms: general summation, shifted general summation, and compensated (sequential) summation. Extension of our systematic error analysis to mixed-precision summation algorithms that allow any number of precisions yields the first probabilistic bounds for the mixed-precision FABsum algorithm. Numerical experiments illustrate that the probabilistic bounds are accurate, and that among the three classes of mono-precision algorithms, compensated summation is generally the most accurate. As for mixed precision algorithms, our recommendation is to minimize the magnitude of intermediate partial sums relative to the precision in which they are computed.
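The abstract's finding that compensated (sequential) summation is generally the most accurate of the three mono-precision classes can be illustrated with the classic Kahan variant. This is a generic sketch of compensated summation, not the paper's exact algorithm or notation:

```python
def kahan_sum(xs):
    """Compensated sequential summation (Kahan's algorithm, a sketch)."""
    s = 0.0  # running sum
    c = 0.0  # compensation: estimate of the low-order bits lost so far
    for x in xs:
        y = x - c        # fold the previously lost low-order part back in
        t = s + y        # if s is large and y small, low bits of y are lost here
        c = (t - s) - y  # recover the lost part (algebraically zero)
        s = t
    return s
```

For example, summing ten copies of 0.1 returns 1.0, whereas the naive left-to-right sum returns 0.9999999999999999; carrying the compensation term keeps the first-order error essentially independent of the number of summands.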
Full work available at URL: https://arxiv.org/abs/2203.15928
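The mixed-precision FABsum algorithm mentioned in the abstract sums blocks in a fast low precision and accumulates the block sums in a higher precision. A minimal illustration, simulating binary32 as the low precision by round-tripping through `struct`; the block size `b` and the test data are illustrative assumptions, not values from the paper:

```python
import struct

def to_f32(x):
    """Round a Python float (binary64) to the nearest binary32 value."""
    return struct.unpack("f", struct.pack("f", x))[0]

def f32_sum(xs):
    """Naive left-to-right summation with every operation rounded to binary32."""
    s = 0.0
    for x in xs:
        s = to_f32(s + to_f32(x))
    return s

def fabsum(xs, b=32):
    """FABsum sketch: block sums in (simulated) binary32, accumulated in binary64."""
    total = 0.0  # high-precision accumulator
    for i in range(0, len(xs), b):
        total += f32_sum(xs[i:i + b])  # low-precision sum of one block of size b
    return total
```

Since only about n/b of the additions run in the slow high precision, the cost stays close to that of the pure low-precision sum, while the low-precision error term is governed by the block size b rather than by n. This matches the abstract's recommendation: keep the intermediate partial sums small relative to the precision in which they are computed.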
Cites Work
- Title not available
- Accuracy and Stability of Numerical Algorithms
- Concentration Inequalities and Martingale Inequalities: A Survey
- Improved error bounds for inner products in floating-point arithmetic
- Probability and Computing
- Error estimation of floating-point summation and dot product
- Accurate and Efficient Floating Point Summation
- Rigorous roundoff error analysis of probabilistic floating-point computations
- A New Approach to Probabilistic Rounding Error Analysis
- Probabilistic Error Analysis for Inner Products
- Mixed precision algorithms in numerical linear algebra
- On relative errors of floating-point operations: optimal bounds and applications
- Sharp estimates for perturbation errors in summations
- Stochastic rounding and its probabilistic backward error analysis
- A Class of Fast and Accurate Summation Algorithms
- Faster stochastic trace estimation with a Chebyshev product identity
- Simulating Low Precision Floating-Point Arithmetic
- Sharper probabilistic backward error analysis for basic linear algebra kernels with random data
Cited In (9)
- Title not available
- Fast and accurate floating point summation with application to computational geometry
- A priori worst case error bounds for floating-point computations
- Bounds on nonlinear errors for variance computation with stochastic rounding
- Probabilistic analysis of floating-point addition
- A Distillation Algorithm for Floating-Point Summation
- Algorithm 908
- Linear-Time Approximation Algorithms for Computing Numerical Summation with Provably Small Errors
- Introducing SummerTime: a package for high-precision computation of sums appearing in DRA method