Covariate assisted screening and estimation
Publication:482879
Abstract: Consider a linear model \(Y = X\beta + z\), where \(X = X_{n,p}\) and \(z \sim N(0, I_n)\). The vector \(\beta\) is unknown but is sparse in the sense that most of its coordinates are \(0\). The main interest is to separate its nonzero coordinates from the zero ones (i.e., variable selection). Motivated by examples in long-memory time series (Fan and Yao [Nonlinear Time Series: Nonparametric and Parametric Methods (2003) Springer]) and the change-point problem (Bhattacharya [In Change-Point Problems (South Hadley, MA, 1992) (1994) 28-56 IMS]), we are primarily interested in the case where the Gram matrix \(G = X'X\) is nonsparse but sparsifiable by a finite-order linear filter. We focus on the regime where signals are both rare and weak, so that successful variable selection is very challenging but still possible. We approach this problem by a new procedure called covariate assisted screening and estimation (CASE). CASE first uses linear filtering to reduce the original setting to a new regression model whose Gram (covariance) matrix is sparse. The new covariance matrix induces a sparse graph, which guides us to conduct multivariate screening without visiting all the submodels. Interacting with the signal sparsity, the graph enables us to decompose the original problem into many separate small-size subproblems (if only we knew where they were!). Linear filtering also induces a so-called problem of information leakage, which can be overcome by the newly introduced patching technique. Together, these give rise to CASE, a two-stage screen-and-clean procedure [Fan and Song, Ann. Statist. 38 (2010) 3567-3604; Wasserman and Roeder, Ann. Statist. 37 (2009) 2178-2201]: we first identify candidate submodels by patching and screening, and then re-examine each candidate to remove false positives.
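A minimal numerical sketch of the sparsification step described above, assuming a Toeplitz, long-memory-style Gram matrix (the size \(n = 200\), decay exponent 0.3, filter taps [1, -2, 1], and threshold 1e-3 are illustrative choices, not taken from the paper):

```python
import numpy as np
from scipy.linalg import toeplitz

# Toy long-memory-style Gram matrix: off-diagonal entries decay slowly,
# so essentially no entry is negligible (G is nonsparse).
n = 200
g = (1.0 + np.arange(n)) ** (-0.3)  # slow polynomial decay (illustrative)
G = toeplitz(g)

# A finite-order linear filter: second-order differencing, taps [1, -2, 1].
# Applying it to the rows of G mimics the filtering step: the filtered
# matrix D @ G has rapidly decaying, near-banded entries.
D = (np.eye(n) - 2 * np.eye(n, k=1) + np.eye(n, k=2))[:-2]  # (n-2) x n
DG = D @ G

def density(M, thr=1e-3):
    """Fraction of entries whose magnitude exceeds thr."""
    return np.mean(np.abs(M) > thr)

print(f"fraction of |entries| > 1e-3 in G:   {density(G):.3f}")   # ~1.00
print(f"fraction of |entries| > 1e-3 in D@G: {density(DG):.3f}")  # roughly 0.13
```

After filtering, the near-banded matrix induces a sparse graph on the variables, and it is the small connected subgraphs of this graph that let the screening stage examine small groups of variables instead of all submodels.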
Recommendations
- Optimality of Graphlet Screening in High Dimensional Variable Selection
- Variable screening for high dimensional time series
- Sure independence screening for ultrahigh dimensional feature space. With discussion and authors' reply
- Sparse covariance thresholding for high-dimensional variable selection
- Exponential screening and optimal rates of sparse estimation
Cites Work
- scientific article (zbMATH DE number 5957408; no title available)
- scientific article (zbMATH DE number 4169866; no title available)
- scientific article (zbMATH DE number 1220667; no title available)
- scientific article (zbMATH DE number 1487640; no title available)
- scientific article (zbMATH DE number 845714; no title available)
- A comparison of the Lasso and marginal regression
- Broadband log-periodogram regression of time series with long-range dependence
- Circular binary segmentation for the analysis of array-based DNA copy number data
- Covariate assisted screening and estimation
- Detecting simultaneous change points in multiple sequences
- High-dimensional graphs and variable selection with the Lasso
- High-dimensional variable selection
- Higher criticism thresholding: Optimal feature selection when useful features are rare and weak
- Multiple Change-Point Estimation With a Total Variation Penalty
- Near-ideal model selection by \(\ell _{1}\) minimization
- Nearly unbiased variable selection under minimax concave penalty
- Nonlinear time series. Nonparametric and parametric methods
- On the Correlation Matrix of the Discrete Fourier Transform and the Fast Solution of Large Toeplitz Systems for Long-Memory Time Series
- Optimality of Graphlet Screening in High Dimensional Variable Selection
- Scaled sparse linear regression
- Sparse inverse covariance estimation with the graphical lasso
- Spatial smoothing and hot spot detection for CGH data using the fused lasso
- Strong oracle optimality of folded concave penalized estimation
- Sure independence screening in generalized linear models with NP-dimensionality
- The Adaptive Lasso and Its Oracle Properties
- The screening and ranking algorithm to detect DNA copy number variations
- UPS delivers optimal phase diagram in high-dimensional variable selection
- Uncertainty Principles and Signal Recovery
- Uncertainty principles and ideal atomic decomposition
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
Cited In (10)
- The Kendall interaction filter for variable interaction screening in high dimensional classification problems
- Covariate assisted screening and estimation
- Inference for sparse linear regression based on the leave-one-covariate-out solution path
- On sufficient variable screening using log odds ratio filter
- Variable selection in functional regression models: a review
- Covariate Information Number for Feature Screening in Ultrahigh-Dimensional Supervised Problems
- Which bridge estimator is the best for variable selection?
- Adaptive sparse estimation with side information
- Higher criticism for large-scale inference, especially for rare and weak effects
- High-dimensional variable screening under multicollinearity