Reducing floating point error in dot product using the superblock family of algorithms
DOI: 10.1137/070679946 · zbMATH Open: 1189.65076 · OpenAlex: W2078152929 · MaRDI QID: Q110775 · FDO: Q110775
Authors: Anthony M. Castaldo, R. Clint Whaley, Anthony T. Chronopoulos
Publication date: January 2009
Published in: SIAM Journal on Scientific Computing
Full work available at URL: https://semanticscholar.org/paper/2483c58592a7651e8f82db540cf19e9ce219f9f2
Recommendations
- A fast dot-product algorithm with minimal rounding errors
- Accurate Sum and Dot Product
- Ultimately fast accurate summation
- Compensated summation and dot product algorithms for floating-point vectors on parallel architectures: error bounds, implementation and application in the Krylov subspace methods
- Error estimation of floating-point summation and dot product
Keywords: algorithms; error bounds; numerical examples; ATLAS; BLAS; computational performance; dot product; error analysis; error behavior; error-reducing algorithms; inner product; memory usage; superblock
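As an illustration of the blocked-accumulation idea behind the "superblock" and "error-reducing algorithms" keywords, the following is a minimal sketch in C. It is not the authors' ATLAS-based superblock implementation; the function name and block-size parameter are hypothetical. Accumulating within fixed-size blocks and then summing the block results shortens every accumulation chain, which limits worst-case rounding-error growth compared with a single running sum.

```c
#include <stddef.h>

/* Hypothetical sketch of a blocked dot product: partial sums are formed
 * within fixed-size blocks, and the block results are then combined.
 * This caps the length of any single accumulation chain, reducing the
 * worst-case rounding-error growth of a naive left-to-right sum. */
double blocked_dot(const double *x, const double *y, size_t n, size_t block)
{
    double total = 0.0;
    for (size_t i = 0; i < n; i += block) {
        size_t end = (i + block < n) ? i + block : n;
        double partial = 0.0;            /* sum within one block */
        for (size_t j = i; j < end; ++j)
            partial += x[j] * y[j];
        total += partial;                /* combine block partial sums */
    }
    return total;
}
```

The paper's superblock family generalizes this idea to multiple levels of blocking chosen to balance error reduction against the performance of the underlying BLAS kernels.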
Cited In (9)
- Floating-point arithmetic
- Sharper Probabilistic Backward Error Analysis for Basic Linear Algebra Kernels with Random Data
- Minimizing synchronizations in sparse iterative solvers for distributed supercomputers
- A Class of Fast and Accurate Summation Algorithms
- Verified numerical computations for large-scale linear systems.
- Numerical stability of algorithms at extreme scale and low precisions
- PreciseSums
- A fast dot-product algorithm with minimal rounding errors
- A New Approach to Probabilistic Rounding Error Analysis
Uses Software