Optimal adaptive estimation of linear functionals under sparsity (Q1991697)

From MaRDI portal
Language: English
Label: Optimal adaptive estimation of linear functionals under sparsity
Description: scientific article

    Statements

    Optimal adaptive estimation of linear functionals under sparsity (English)
    30 October 2018
    The authors consider the model \[ y_i = \theta_i + \sigma\xi_i,\quad i= 1,\dots, d, \] where \(\theta\in \mathbb{R}^d\) is an unknown vector of parameters, the \(\xi_i\) are i.i.d. standard normal random variables and \(\sigma>0\) is the noise level. The main task is to estimate, from the observations \(y_i\), the linear functional \(L(\theta)=\sum_{i=1}^d \theta_i\). For \(s\in\{1,\dots,d\}\), let \(\Theta_s=\big\{ \theta\in \mathbb{R}^d:\ \|\theta\|_0\le s\big\}\), where \(\|\theta\|_0\) is the number of nonzero components of \(\theta\), so that the parameter \(s\) characterizes the sparsity of \(\theta\). An adaptive estimator is proposed that achieves a nonasymptotic rate of convergence differing from the minimax rate by at most a logarithmic factor, and it is shown that this optimal adaptive rate cannot be improved when \(s\) is unknown. The issue of simultaneous adaptation to both \(s\) and \(\sigma^2\) is also addressed, and an estimator achieving the optimal adaptive rate when both \(s\) and \(\sigma^2\) are unknown is proposed and studied. The quality of an estimator of \(L(\theta)\) is measured by its maximum squared risk over \(\Theta_s\).
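    As a quick numerical illustration of this Gaussian sequence model and of why exploiting sparsity helps when estimating \(L(\theta)\), here is a minimal Python/NumPy sketch. The values of \(d\), \(s\), \(\sigma\), the placement of the nonzero coordinates and the threshold \(\sigma\sqrt{2\log(1+d/s^2)}\) are all illustrative assumptions; this is not the adaptive estimator constructed in the paper, which in particular does not require knowledge of \(s\) or \(\sigma\).

```python
import numpy as np

rng = np.random.default_rng(0)

d, s, sigma = 1000, 10, 1.0          # dimension, sparsity, noise level (illustrative values)

# Generate an s-sparse parameter vector theta and observations y_i = theta_i + sigma * xi_i.
theta = np.zeros(d)
theta[:s] = 3.0 * sigma              # arbitrary nonzero entries on the first s coordinates
y = theta + sigma * rng.standard_normal(d)

L_true = theta.sum()                 # target linear functional L(theta) = sum_i theta_i

# Naive estimator: sum of all observations (unbiased, but its variance is d * sigma^2).
L_naive = y.sum()

# Thresholded estimator: keep only coordinates exceeding a noise-calibrated threshold.
# The calibration sigma * sqrt(2 * log(1 + d / s**2)) is only an illustrative choice of
# the kind used for sparse mean vectors; the paper's adaptive construction differs.
tau = sigma * np.sqrt(2.0 * np.log(1.0 + d / s**2))
L_thresh = y[np.abs(y) > tau].sum()

print(f"L(theta) = {L_true:.2f}, naive = {L_naive:.2f}, thresholded = {L_thresh:.2f}")
```

    For sparse \(\theta\), the naive sum of all observations carries variance \(d\sigma^2\), whereas the thresholded sum discards most pure-noise coordinates; the point of the paper is to achieve the best possible rate of this kind without knowing \(s\) or \(\sigma^2\).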
    nonasymptotic minimax estimation
    adaptive estimation
    linear functional
    sparsity
    unknown noise variance

    Identifiers