Analysis of multivariate Gegenbauer approximation in the hypercube (Q2190668)
From MaRDI portal
Property / arXiv ID: 1811.04587
Language | Label | Description | Also known as
---|---|---|---
English | Analysis of multivariate Gegenbauer approximation in the hypercube | scientific article |

Statements
Analysis of multivariate Gegenbauer approximation in the hypercube (English)
21 June 2020
Orthogonal polynomials are an important tool in numerical analysis. They are used, e.g., for quadrature rules, for expansions of smooth (analytic or otherwise) functions, and for many other kinds of approximation, so they belong, in a way, to approximation theory within numerical analysis. Mostly they are used in one dimension, and all classical orthogonal polynomials (Gegenbauer, Legendre, Chebyshev, Laguerre, \dots) are available in one variable. In order to use them, for instance, for solving partial differential equations, multivariate versions are desirable; to this end, tensor-product constructions can be used. In this article, multivariate variants of the Gegenbauer (orthogonal) polynomials are investigated, with particular emphasis on good upper estimates for the asymptotic decay of the expansion coefficients of analytic (or otherwise smooth/differentiable) functions. Moreover, error bounds for approximation by truncated expansions are provided, the truncation being governed by a rule on the indices, usually an upper bound on their \(q\)-norm. These results are then used to analyse approximation methods for the solution of parameter-dependent elliptic PDEs, i.e., elliptic equations whose coefficients depend on a continuous parameter.

First, the Gegenbauer polynomials are introduced. They are defined by a degree (usually \(n\)), a weight function depending on a parameter \(\lambda>-\frac12\), and a domain (the unit interval \([-1,1]\), later extended to the cube \([-1,1]^d\)), and are normalised by certain constants involving \(\Gamma\)-functions. In the theorems that follow, the parameter is restricted to positive or at least non-negative \(\lambda\). So-called Bernstein ellipses \(\mathcal E\) are defined with foci \(\pm1\) and axes determined by a parameter \(\rho>1\). They and their circumferences \(L(\cdot)\) play an essential part in the estimates on the asymptotic behaviour of the expansion coefficients, since they define the regions of analyticity of the approximands and certain constants in the upper bounds. The Gegenbauer polynomials are generalised from \(d=1\) to \(d>1\) by tensor products (``tensorisation''), and the Bernstein ellipses are tensorised in the same fashion, the parameter \(\rho\) being generalised to a vector \(\mathbf{\rho}\).

A first result on the decay of the expansion coefficients is given in Theorem 3.1, where the upper bound depends essentially on \(\rho\), on \(L(\mathcal E)\), on the maximum of the approximand over \(\mathcal E\), and on certain constants coming from the Gegenbauer polynomials and their parameter \(\lambda\). Two corollaries concerning the Chebyshev polynomials of the first and second kind follow. Replacing the Bernstein ellipses by different regions, determined by a positive parameter \(h\) and a non-negative \(\epsilon\) and containing Bernstein ellipses, leads to further asymptotic upper bounds on the expansion coefficients; the (negative powers of) \(\rho\) on the right-hand side now depend on \(h\). Special cases for \(\lambda=1,0\) are treated as well. The quality of the approximation of an analytic function by its expansion is measured by the remainder of the infinite expansion after it is cut off beyond a certain index; to specify where this happens, index sets of \(q\)-norm up to a given number \(N\) are introduced.
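To fix ideas, the main objects just described can be sketched in standard notation (which need not coincide with the paper's exact normalisations and constants): the Gegenbauer weight, the Bernstein ellipse and its tensorised counterpart (a polyellipse), and the typical shape of a coefficient bound of the kind provided by Theorem 3.1,
\[
\omega_\lambda(x)=(1-x^2)^{\lambda-\frac12},\qquad
\mathcal E_\rho=\Bigl\{\tfrac12\bigl(u+u^{-1}\bigr):\ u\in\mathbb C,\ |u|=\rho\Bigr\},\qquad
\mathcal E_{\mathbf{\rho}}=\mathcal E_{\rho_1}\times\cdots\times\mathcal E_{\rho_d},
\]
\[
|c_{\mathbf n}|\;\le\; K(\lambda,d)\,L(\mathcal E_{\mathbf{\rho}})\,\max_{\mathbf z\in\mathcal E_{\mathbf{\rho}}}|f(\mathbf z)|\;\prod_{j=1}^{d}\rho_j^{-n_j},
\]
where \(c_{\mathbf n}\) denotes the coefficient of the multi-index \(\mathbf n\) in the tensorised Gegenbauer expansion of \(f\); the display is only schematic and is meant to indicate which quantities enter the estimates.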
Using these index sets, the authors obtain estimates on the approximation error for an analytic approximand whose expansion coefficients, defined by a projection, are cut off in the way just described; the bounds are powers \(\rho^{-N/\gamma}\), where \(N\) is as above and \(\gamma\) depends on the dimension \(d\) and on \(q\). Since \(\gamma\) grows with \(d\), this exhibits the curse of dimensionality. For approximands of lower regularity and \(\lambda>0\), there are upper bounds on the coefficients that depend on Sobolev-type norms of the approximand's derivatives, on \(\lambda\), and on certain normalisation factors of the Gegenbauer polynomials. An error estimate for the truncated expansion of functions from Sobolev spaces is given that depends on the \(N\) above, with \(q=\infty\), and on the function's differentiability.

As an application, the numerical solution of elliptic PDEs whose coefficients depend on a parameter is considered. For this purpose, upper bounds on the coefficients of the expansion of the expected solution, used as an \textit{Ansatz}, are again given, now measured in the norm of a general Banach space; the special case of Legendre polynomials is used. So one obtains, in general, an expansion of the solution to be computed in polynomials, truncated after a finite number of indices, whose coefficients depend on the aforementioned parameter. The error between the actual solution and its numerical approximation can be bounded by sums of the norms (in the said Banach space) of these coefficients; if normalised properly, the orthogonal polynomials contribute only a factor of \(1\) each time. The quality of the solution can therefore be measured by the \textit{largest} coefficient, which is why the \textit{upper} bounds on the expansion coefficients established in the paper are so important: the largest coefficient makes the biggest contribution to the error and must be kept under control most carefully. How the errors are estimated on the basis of the known size of the expansion coefficients is demonstrated in the last result of the paper.
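Schematically, and again in notation that need not match the paper's precise statements, the truncation runs over an \(\ell^q\) ball of multi-indices and the resulting error decays like a negative power of \(\rho\):
\[
\Lambda_N^q=\bigl\{\mathbf n\in\mathbb N_0^d:\ \|\mathbf n\|_q\le N\bigr\},\qquad
f_N=\sum_{\mathbf n\in\Lambda_N^q} c_{\mathbf n}\,C^{\lambda}_{\mathbf n},\qquad
\|f-f_N\|\;\le\; K\,\rho^{-N/\gamma},
\]
with \(\gamma=\gamma(d,q)\) growing with the dimension \(d\) (the curse of dimensionality mentioned above); in the application to parametric elliptic PDEs the same structure appears, with the coefficients measured in the norm of a Banach space.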
hypercube
polyellipse
multivariate Gegenbauer approximation
\(\ell^q\) ball index set
error bound