Approximation of functions of few variables in high dimensions (Q623354)

From MaRDI portal
 
Cites work:
    Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
    Stable signal recovery from incomplete and inaccurate measurements
    Compressed sensing and best 𝑘-term approximation
    Diffusion wavelets
    Compressed sensing
    On the Size of Separating Systems and Families of Perfect Hash Functions
    Best subset selection, persistence in high-dimensional statistical learning and optimization under \(l_1\) constraint
    New bounds for perfect hashing via information theory
    Q4878632
    Learning juntas
    Tractability of multivariate problems. Volume I: Linear information
    Convergence rates for sparse chaos approximations of elliptic problems with stochastic coefficients

Latest revision as of 18:07, 3 July 2024

scientific article
English: Approximation of functions of few variables in high dimensions

    Statements

    Approximation of functions of few variables in high dimensions (English)
    14 February 2011
    Many high-dimensional computational problems in fact depend on far fewer variables than originally assumed. For practical computations it is important to reduce such a high-dimensional problem (in, say, \(N\) variables) to one in fewer variables (say, \(\ell\)); this process is known as dimension reduction. The motivation is the so-called curse of dimensionality: the complexity of a problem grows dramatically with the number of variables. There are several approaches to this question, and one key distinction is whether the identity of the few relevant coordinates is known in advance or not. The former case is settled to some extent, in the sense that there are error estimates for the reduced problem, compared with the original one, which depend on the number of available function values and on the \(Lip1\)-norm of the approximand. The latter case is addressed in this paper, where a certain number of adaptively chosen function values must be provided to obtain estimates of the same order as in the former case. This number is, of course, larger, typically by an \(O(\log N)\) factor (times a factor depending on \(\ell\)). Algorithms for this adaptive choice of function evaluations are given explicitly.
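    To illustrate the setting (not the paper's adaptive algorithm), the following sketch probes a black-box function of \(N\) variables to find the coordinates it actually depends on, by re-drawing one coordinate at a time at a few random base points. This naive probe costs \(O(N)\) evaluations per base point; the point of the adaptive schemes discussed in the paper is to get by with far fewer evaluations, on the order of \(\log N\) times an \(\ell\)-dependent factor. The function names and tolerances here are illustrative assumptions.

```python
import numpy as np

def active_coordinates(f, N, tol=1e-9, trials=5):
    """Naively detect which of the N coordinates f depends on.

    For each of a few random base points x, re-draw one coordinate at a
    time and flag coordinate i as active if f changes. This is an O(N)
    probe per base point, for illustration only -- not the adaptive,
    ~log(N)-evaluation strategy of the paper.
    """
    rng = np.random.default_rng(0)  # fixed seed for reproducibility
    active = set()
    for _ in range(trials):
        x = rng.random(N)
        fx = f(x)
        for i in range(N):
            y = x.copy()
            y[i] = rng.random()          # perturb only coordinate i
            if abs(f(y) - fx) > tol:     # value changed => i is significant
                active.add(i)
    return sorted(active)

# Toy example: N = 20 variables, but f depends only on x[3] and x[11].
f = lambda x: np.sin(x[3]) + x[11] ** 2
print(active_coordinates(f, 20))  # prints [3, 11]
```

    A function that is constant in a coordinate leaves \(f\) exactly unchanged under the re-draw, so inactive coordinates are never flagged; the tolerance only guards against floating-point noise.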
    approximation of functions
    high dimension
    significant variables
    sensitivity analysis

    Identifiers