Parametric PDEs: sparse or low-rank approximations?

From MaRDI portal
Publication:4555985

DOI: 10.1093/IMANUM/DRX052
zbMATH Open: 1477.65189
arXiv: 1607.04444
OpenAlex: W2515829451
MaRDI QID: Q4555985
FDO: Q4555985

Markus Bachmayr, Albert Cohen, Wolfgang Dahmen

Publication date: 23 November 2018

Published in: IMA Journal of Numerical Analysis

Abstract: We consider adaptive approximations of the parameter-to-solution map for elliptic operator equations depending on a large or infinite number of parameters, comparing approximation strategies of different degrees of nonlinearity: sparse polynomial expansions, general low-rank approximations separating spatial and parametric variables, and hierarchical tensor decompositions separating all variables. We describe corresponding adaptive algorithms based on a common generic template and show their near-optimality with respect to natural approximability assumptions for each type of approximation. A central ingredient in the resulting bounds for the total computational complexity are new operator compression results for the case of infinitely many parameters. We conclude with a comparison of the complexity estimates based on the actual approximability properties of classes of parametric model problems, which shows that the computational costs of optimized low-rank expansions can be significantly lower or higher than those of sparse polynomial expansions, depending on the particular type of parametric problem.
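To make the comparison in the abstract concrete, the following is a minimal illustrative sketch (not taken from the paper) contrasting the two approximation classes on a toy parametric map u(x, y) = 1 / (1 + 0.5·y·sin(πx)), with x playing the role of the spatial variable and y the parameter: a truncated polynomial expansion in the parameter versus an optimal low-rank (SVD) separation of spatial and parametric variables. All function choices and truncation levels here are assumptions made for illustration.

```python
# Illustrative toy example (assumed setup, not the paper's model problem):
# compare a degree-d polynomial expansion in the parameter y with an
# optimal rank-r separation of space and parameter via truncated SVD.
import numpy as np

nx, ny = 200, 200
x = np.linspace(0.0, 1.0, nx)        # "spatial" grid
y = np.linspace(-1.0, 1.0, ny)       # parameter samples
X, Y = np.meshgrid(x, y, indexing="ij")
U = 1.0 / (1.0 + 0.5 * Y * np.sin(np.pi * X))  # snapshot matrix: rows = space, cols = parameter

# Low-rank approximation: keep the r largest singular triplets.
r = 5
P, s, Qt = np.linalg.svd(U, full_matrices=False)
U_lr = (P[:, :r] * s[:r]) @ Qt[:r, :]
err_lr = np.linalg.norm(U - U_lr) / np.linalg.norm(U)

# Polynomial expansion in the parameter: truncate the geometric series
# 1/(1+t) = sum_k (-t)^k with t = 0.5*y*sin(pi*x), |t| <= 1/2.
d = 5
T = 0.5 * Y * np.sin(np.pi * X)
U_poly = sum((-T) ** k for k in range(d + 1))
err_poly = np.linalg.norm(U - U_poly) / np.linalg.norm(U)

print(f"rank-{r} SVD relative error:        {err_lr:.2e}")
print(f"degree-{d} polynomial relative error: {err_poly:.2e}")
```

For this particular (analytic, effectively low-rank) map the SVD truncation wins at equal separation rank, which mirrors one side of the paper's conclusion; the abstract's point is that for other classes of parametric problems the ordering, and the associated computational cost, can be reversed.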


Full work available at URL: https://arxiv.org/abs/1607.04444

Cited In (23)

This page was built for publication: Parametric PDEs: sparse or low-rank approximations?
