A shrinkage principle for heavy-tailed data: high-dimensional robust low-rank matrix recovery
Publication: 820791
DOI: 10.1214/20-AOS1980
zbMath: 1479.62034
arXiv: 1603.08315
OpenAlex: W3192637965
MaRDI QID: Q820791
Ziwei Zhu, Jianqing Fan, Wei-Chen Wang
Publication date: 28 September 2021
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1603.08315
Keywords: robust statistics, shrinkage, high-dimensional statistics, low-rank matrix recovery, trace regression, heavy-tailed data
Mathematics Subject Classification:
- Estimation in multivariate analysis (62H12)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Asymptotic properties of nonparametric inference (62G20)
- Nonparametric robustness (62G35)
- Signal theory (characterization, reconstruction, filtering, etc.) (94A12)
Related Items
- Robust covariance estimation for distributed principal component analysis
- Differential network inference via the fused D-trace loss with cross variables
- Robust Recommendation via Social Network Enhanced Matrix Completion
- A framework of regularized low-rank matrix models for regression and classification
- Adaptive robust large volatility matrix estimation based on high-frequency financial data
- Large volatility matrix analysis using global and national factor models
- Rate-optimal robust estimation of high-dimensional vector autoregressive models
- Robust matrix estimations meet Frank-Wolfe algorithm
- Robust inference for high-dimensional single index models
- Covariance Estimation for Matrix-valued Data
- Mining the factor zoo: estimation of latent factor models with sufficient proxies
- Robust high-dimensional tuning free multiple testing
- Understanding Implicit Regularization in Over-Parameterized Single Index Model
- High-dimensional robust approximated \(M\)-estimators for mean regression with asymmetric data
- Robust parameter estimation of regression models under weakened moment assumptions
- High dimensional generalized linear models for temporal dependent data
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Geometric median and robust estimation in Banach spaces
- Concentration inequalities and moment bounds for sample covariance operators
- Estimation of high-dimensional low-rank matrices
- Estimation of (near) low-rank matrices with noise and high-dimensional scaling
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Tree-guided group lasso for multi-response regression with structured sparsity, with an application to eQTL mapping
- Optimal rates of convergence for sparse covariance matrix estimation
- Empirical risk minimization for heavy-tailed losses
- The restricted isometry property and its implications for compressed sensing
- One-step sparse estimates in nonconcave penalized likelihood models
- Covariance regularization by thresholding
- Sparsistency and rates of convergence in large covariance matrix estimation
- On the prediction loss of the Lasso in the partially labeled setting
- High-dimensional robust precision matrix estimation: cellwise corruption under \(\epsilon\)-contamination
- Challenging the empirical mean and empirical variance: a deviation study
- Sub-Gaussian estimators of the mean of a random matrix with heavy-tailed entries
- Statistical consistency and asymptotic normality for high-dimensional robust \(M\)-estimators
- ROP: matrix recovery via rank-one projections
- Simultaneous analysis of Lasso and Dantzig selector
- Large covariance estimation through elliptical factor models
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- High-dimensional graphs and variable selection with the Lasso
- Exact matrix completion via convex optimization
- Atomic Decomposition by Basis Pursuit
- Accurate Prediction of Phase Transitions in Compressed Sensing via a Connection to Minimax Denoising
- Reconstruction From Anisotropic Random Measurements
- Adaptive Thresholding for Sparse Covariance Matrix Estimation
- Adaptive Huber Regression
- Near-Optimal Signal Recovery From Random Projections: Universal Encoding Strategies?
- Guaranteed Minimum-Rank Solutions of Linear Matrix Equations via Nuclear Norm Minimization
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Robust Estimators in High-Dimensions Without the Computational Intractability
- Noisy Matrix Completion: Understanding Statistical Guarantees for Convex Relaxation via Nonconvex Optimization
- Inference and uncertainty quantification for noisy matrix completion
- Minimax Rates of Estimation for High-Dimensional Linear Regression Over \(\ell_q\)-Balls
- Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements
- Sparse Representation of a Polytope and Recovery of Sparse Signals and Low-Rank Matrices
- Estimation of High Dimensional Mean Regression in the Absence of Symmetry and Light Tail Assumptions
- A Simpler Approach to Matrix Completion
- Restricted strong convexity and weighted matrix completion: Optimal bounds with noise
- Large Covariance Estimation by Thresholding Principal Orthogonal Complements
- Compressed sensing
- A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers