Predict-and-Recompute Conjugate Gradient Variants


DOI: 10.1137/19M1276856
zbMATH Open: 1452.65057
arXiv: 1905.01549
MaRDI QID: Q5132005
FDO: Q5132005


Authors: Tyler Chen, Erin Carson


Publication date: 9 November 2020

Published in: SIAM Journal on Scientific Computing

Abstract: The standard implementation of the conjugate gradient algorithm suffers from communication bottlenecks on parallel architectures, due primarily to the two global reductions required every iteration. In this paper, we study conjugate gradient variants which decrease the runtime per iteration by overlapping global synchronizations, and in the case of pipelined variants, matrix-vector products. Through the use of a predict-and-recompute scheme, whereby recursively updated quantities are first used as a predictor for their true values and then recomputed exactly at a later point in the iteration, these variants are observed to have convergence behavior nearly as good as the standard conjugate gradient implementation on a variety of test problems. We give a rounding error analysis that provides insight into this observation. It is also verified experimentally that the variants studied do indeed reduce the runtime per iteration in practice and that they scale similarly to previously studied communication-hiding variants. Finally, because these variants achieve good convergence without the use of any additional input parameters, they have the potential to be used in place of the standard conjugate gradient implementation in a range of applications.
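To make the predict-and-recompute idea concrete, here is a minimal sequential sketch in Python/NumPy. It uses the classical recurrence nu_{k+1} = alpha_k^2 * s^T s - nu_k (exact in exact arithmetic) as the predictor for the residual inner product r^T r, forms beta and the new search direction from it, and then recomputes the exact inner product afterwards. The function name pr_cg and the exact arrangement of steps are illustrative assumptions, not the paper's algorithm verbatim; the variants studied in the paper also predict and recompute other recursively updated quantities and overlap the recomputation with global reductions and matrix-vector products, which a sequential sketch cannot show.

```python
import numpy as np

def pr_cg(A, b, x0=None, tol=1e-8, maxiter=1000):
    """Illustrative predict-and-recompute CG sketch (sequential model).

    A predicted value of nu = r^T r is used to form beta and the next
    search direction early; the exact value is recomputed afterwards.
    """
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    p = r.copy()
    nu = r @ r                      # exact nu_0 = r_0^T r_0
    b_norm = np.linalg.norm(b)
    for _ in range(maxiter):
        s = A @ p
        # One batched reduction: mu = p^T s and sigma = s^T s together.
        mu, sigma = p @ s, s @ s
        alpha = nu / mu
        x += alpha * p
        r -= alpha * s
        # Predict: recurrence for r^T r, exact in exact arithmetic,
        # since r_{k+1}^T r_{k+1} = nu - 2*alpha*mu + alpha^2*sigma
        # and alpha*mu = nu.
        nu_pred = alpha * alpha * sigma - nu
        beta = nu_pred / nu
        p = r + beta * p
        # Recompute: replace the predictor with the exact inner product
        # (in a parallel variant this reduction would be overlapped).
        nu = r @ r
        if np.sqrt(nu) <= tol * b_norm:
            break
    return x

# Example usage on a small SPD system.
n = 100
A = np.diag(np.linspace(1.0, 10.0, n))
b = np.ones(n)
x = pr_cg(A, b)
print(np.linalg.norm(b - A @ x))    # residual norm should be small
```

In finite precision the predicted and recomputed values of nu drift apart, which is precisely the effect the paper's rounding error analysis examines; recomputing the exact value each iteration is what keeps the convergence close to that of standard CG.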


Full work available at URL: https://arxiv.org/abs/1905.01549



