A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery (Q820791)

Language: English
Label: A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
Description: scientific article

    Statements

    A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery (English)
    28 September 2021
    The popular assumption of sub-Gaussian or sub-exponential noise in the theoretical analysis of standard statistical procedures is often violated by real data, which adversely affects widely used methods. Simple and effective principles for robust statistical inference are therefore needed to handle heavy-tailed data, and the paper under review addresses this problem. Its authors propose truncation of univariate data and, more generally, shrinkage of multivariate data to achieve robustness. The approach is illustrated through a general trace-regression model, which embraces the linear model, matrix compressed sensing, matrix completion, and multitask regression as special cases. The authors develop a generalized loss, the truncated and shrinkage sample covariances, and the corresponding M-estimators. They then present the conditions the robust covariance inputs must satisfy for the M-estimator to attain the desired statistical error rates, and apply these results to each of the above-mentioned problems to derive the specific error rates explicitly. Next, the convergence of the shrinkage covariance estimator in spectral norm is studied: in high dimensions the proposed robust covariance estimator is minimax optimal up to a logarithmic factor, whereas the traditional sample covariance is not. Finally, simulation studies demonstrate the advantage of the proposed robust estimators over the standard ones.
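    The shrinkage principle described in the review can be illustrated in code: scalar responses are truncated at a level tau, multivariate covariates are shrunk toward the origin so that their norm does not exceed tau, and the robust covariance input for the M-estimator is formed from the shrunk data. The following minimal NumPy sketch assumes l4-norm shrinkage and an illustrative choice of tau; the precise norm and threshold tuning used in the paper may differ.

        import numpy as np

        def truncate(y, tau):
            # Univariate truncation: sign(y) * min(|y|, tau).
            return np.sign(y) * np.minimum(np.abs(y), tau)

        def shrink_rows(X, tau, p=4):
            # Shrink each row x_i toward the origin so that ||x_i||_p <= tau:
            # x_i <- x_i * min(1, tau / ||x_i||_p).
            norms = np.linalg.norm(X, ord=p, axis=1)
            scale = np.minimum(1.0, tau / np.maximum(norms, 1e-12))
            return X * scale[:, None]

        def shrinkage_covariance(X, tau):
            # Second-moment estimator built from the shrunk observations:
            # (1/n) * sum_i x_tilde_i x_tilde_i^T.
            Xs = shrink_rows(X, tau)
            return Xs.T @ Xs / X.shape[0]

        # Toy comparison on heavy-tailed (Student-t, 3 degrees of freedom) covariates.
        rng = np.random.default_rng(0)
        n, d = 500, 20
        X = rng.standard_t(df=3, size=(n, d))
        tau = (n * d / np.log(d)) ** 0.25   # illustrative threshold growing with n
        Sigma_shrink = shrinkage_covariance(X, tau)
        Sigma_sample = X.T @ X / n          # non-robust counterpart
        y = X @ np.ones(d) + rng.standard_t(df=3, size=n)
        y_trunc = truncate(y, tau)          # robust response input for an M-estimator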
    heavy-tailed data
    high-dimensional statistics
    low-rank matrix recovery
    robust statistics
    shrinkage
    trace regression
