A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery (Q820791)

From MaRDI portal
arXiv ID: 1603.08315
OpenAlex ID: W3192637965


scientific article
Language: English

    Statements

    A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery (English)
    28 September 2021
    The common assumption of sub-Gaussian or sub-exponential noise in the theoretical analysis of standard statistical procedures is often violated by real data, which adversely affects widely used methods. Simple and effective principles for robust statistical inference with heavy-tailed data are therefore needed, and the paper under review addresses this need. The authors propose truncation of univariate data and, more generally, shrinkage of multivariate data to achieve robustness. Their approach is illustrated through a general model that embraces the linear model, matrix compressed sensing, matrix completion, and multitask regression as specific examples. The authors develop a generalized loss, the truncated and shrinkage sample covariances, and the corresponding M-estimators. They then present the conditions required of the robust covariance inputs to guarantee the desired statistical error rates of the M-estimator, and apply these results to each of the specific problems above to derive explicit error rates. Next, the convergence of the shrinkage covariance estimator under the spectral norm is studied: in high dimensions the proposed robust covariance estimator is minimax optimal up to a logarithmic factor, whereas the traditional sample covariance is not. Finally, simulation studies are presented that demonstrate the advantage of the proposed robust estimators over the standard ones.
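The shrinkage principle summarized in the review can be sketched in a few lines of NumPy: each multivariate observation is shrunk toward the origin before the sample covariance is formed, so that rare extreme observations cannot dominate the estimate. This is an illustrative sketch only; the function names and the tuning of the threshold `tau` below are our assumptions, not the paper's exact prescriptions.

```python
import numpy as np

def shrink(X, tau):
    # Row-wise multivariate shrinkage: psi_tau(x) = min(1, tau / ||x||_2) * x.
    # Observations with norm below tau are unchanged; larger ones are
    # rescaled to lie on the sphere of radius tau.
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    return X * np.minimum(1.0, tau / np.maximum(norms, 1e-12))

def shrinkage_covariance(X, tau):
    # Second-moment matrix of the shrunk data (mean-zero convention).
    Xs = shrink(X, tau)
    return Xs.T @ Xs / X.shape[0]

rng = np.random.default_rng(0)
n, d = 2000, 5
# Heavy-tailed sample: multivariate t with 3 degrees of freedom
# (finite variance, but infinite fourth moment).
Z = rng.standard_normal((n, d))
W = rng.chisquare(3, size=(n, 1)) / 3.0
X = Z / np.sqrt(W)

tau = np.sqrt(n * d / np.log(d))  # illustrative tuning, not the paper's choice
S_robust = shrinkage_covariance(X, tau)
S_naive = X.T @ X / n  # ordinary sample covariance, for comparison
```

Because every shrunk observation has norm at most `tau`, the summands of `S_robust` are uniformly bounded, which is what allows the sub-Gaussian-type concentration (and the minimax-optimal spectral-norm rate up to a log factor) discussed in the review, while `S_naive` remains exposed to the heavy tails.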
    0 references
    heavy-tailed data
    0 references
    high-dimensional statistics
    0 references
    low-rank matrix recovery
    0 references
    robust statistics
    0 references
    shrinkage
    0 references
    trace regression
    0 references
    0 references
    0 references
    0 references
