Numerical stability of algorithms at extreme scale and low precisions (Q6200205)

From MaRDI portal
 
Cites work
    LAPACK Users' Guide
    Mixed Precision Block Fused Multiply-Add: Error Analysis and Application to GPU Tensor Cores
    A Class of Fast and Accurate Summation Algorithms
    Accelerating the Solution of Linear Systems by Iterative Refinement in Three Precisions
    Reducing Floating Point Error in Dot Product Using the Superblock Family of Algorithms
    Stochastic Rounding and Its Probabilistic Backward Error Analysis
    Q4348513
    A set of level 3 basic linear algebra subprograms
    Q4917542
    Anatomy of high-performance matrix multiplication
    Mixed-precision iterative refinement using tensor cores on GPUs to accelerate solution of linear systems
    Random Matrices Generating Large Growth in LU Factorization with Pivoting
    Accuracy and Stability of Numerical Algorithms
    A New Approach to Probabilistic Rounding Error Analysis
    Sharper Probabilistic Backward Error Analysis for Basic Linear Algebra Kernels with Random Data
    Squeezing a Matrix into Half Precision, with an Application to Solving Linear Systems
    Probabilistic Error Analysis for Inner Products
    Handbook of Floating-Point Arithmetic
    Stochastic Perturbation Theory
    Error Analysis of Direct Methods of Matrix Inversion
    Q4342463
    Modern Error Analysis


Language: English
Label: Numerical stability of algorithms at extreme scale and low precisions
Description: scientific article; zbMATH DE number 7822586

    Statements

    Numerical stability of algorithms at extreme scale and low precisions (English)
    22 March 2024
    Summary: The largest dense linear systems that are being solved today are of order \(n=10^7\). Single-precision arithmetic, which has a unit roundoff \(u \approx 10^{-8}\), is widely used in scientific computing, and half-precision arithmetic, with \(u \approx 10^{-4}\), is increasingly being exploited as it becomes more readily available in hardware. Standard rounding error bounds for numerical linear algebra algorithms are proportional to \(p(n)u\), with \(p\) growing at least linearly with \(n\). Therefore we are at the stage where these rounding error bounds are not able to guarantee any accuracy or stability in the computed results for some extreme-scale or low-accuracy computations. We explain how rounding error bounds with much smaller constants can be obtained. Blocked algorithms, which break the data into blocks of size \(b\), lead to a reduction in the error constants by a factor \(b\) or more. Two architectural features also reduce the error constants: extended precision registers and fused multiply-add operations, either at the scalar level or in mixed precision block form. We also discuss a new probabilistic approach to rounding error analysis that provides error constants that are the square roots of those of the worst-case bounds. Combining these different considerations provides new understanding of the numerical stability of extreme scale and low precision computations in numerical linear algebra. For the entire collection see [Zbl 07816361].
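    A rough back-of-the-envelope illustration of these magnitudes (not part of the original record; the values assume \(n = 10^7\), half-precision unit roundoff \(u \approx 5 \times 10^{-4}\), and an illustrative block size \(b = 256\)):
    \[
    nu \approx 5 \times 10^{3} \ \text{(worst-case constant: vacuous)}, \qquad
    (n/b + b)\,u \approx 20 \ \text{(blocked algorithm)}, \qquad
    \sqrt{n}\,u \approx 1.6 \ \text{(probabilistic bound)},
    \]
    so only by combining the blocked and probabilistic reductions (roughly \(\sqrt{n/b + b}\,u \approx 0.1\) under the same assumptions) does the bound become informative at half precision.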
    Keywords: floating-point arithmetic; backward error analysis; numerical stability; probabilistic rounding error analysis; blocked algorithm; fused multiply-add; mixed precision computation

    Identifiers