Estimating sufficient reductions of the predictors in abundant high-dimensional regressions
DOI: 10.1214/11-aos962
zbMath: 1246.62150
arXiv: 1205.6556
OpenAlex: W3103296927
MaRDI QID: Q116954
R. Dennis Cook, Liliana Forzani, Adam J. Rothman
Publication date: 1 February 2012
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/1205.6556
Keywords: sparsity, oracle property, sufficient dimension reduction, central subspace, principal fitted components, SPICE
MSC classification: 62H99 (Multivariate analysis); 62H25 (Factor analysis and principal components; correspondence analysis); 62J99 (Linear inference, regression)
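These keywords summarize the paper's setting. As a one-formula sketch, in standard principal fitted components notation (supplied here for orientation, not quoted from this record), the inverse regression model is
\[
X \mid Y = y \;\sim\; N_p\!\left(\mu + \Gamma \beta f_y,\; \Delta\right),
\]
where \(f_y \in \mathbb{R}^r\) is a known basis of functions of the response and \(\Gamma \in \mathbb{R}^{p \times d}\). Under this model, \(Y\) is independent of \(X\) given the sufficient reduction \(R(X) = \Gamma^{\top}\Delta^{-1}X\), so estimating \(R\) in high dimensions hinges on a regularized estimate of the \(p \times p\) covariance \(\Delta\) (e.g., via SPICE). The paper studies this in "abundant" regressions, where many predictors carry information about the response.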
Related Items
- Dimension reduction in binary response regression: a joint modeling approach
- On the optimality of sliced inverse regression in high dimensions
- Central subspaces review: methods and applications
- Prediction in abundant high-dimensional linear regression
- Double-slicing assisted sufficient dimension reduction for high-dimensional censored data
- Dealing with big data: comparing dimension reduction and shrinkage regression methods
- The effect of data contamination in sliced inverse regression and finite sample breakdown point
- On consistency and sparsity for sliced inverse regression in high dimensions
- Estimating sufficient reductions of the predictors in abundant high-dimensional regressions
- Sparse Sliced Inverse Regression Via Lasso
- Fused Estimators of the Central Subspace in Sufficient Dimension Reduction
- Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension
Uses Software
- abundant
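The paper's estimator is implemented in the abundant package for R. For orientation only, the following is a minimal NumPy sketch of the simplest (isotropic) principal fitted components reduction; the function name, polynomial basis choice, and defaults are illustrative assumptions, not the package's interface:

    import numpy as np

    def pfc_reduction(X, y, d=1, degree=3):
        """Isotropic principal fitted components: a rough sketch, not the
        paper's abundant-regression estimator (which also models Delta)."""
        n, p = X.shape
        Xc = X - X.mean(axis=0)                      # center predictors
        # Basis f_y: polynomial functions of the response, centered
        F = np.column_stack([y**k for k in range(1, degree + 1)])
        F = F - F.mean(axis=0)
        # Fitted values of the inverse regression of X on f_y
        B, *_ = np.linalg.lstsq(F, Xc, rcond=None)
        fitted = F @ B
        # Sigma_fit: the part of the predictor covariance explained by y
        sigma_fit = fitted.T @ fitted / n
        # Gamma-hat: top-d eigenvectors of Sigma_fit (isotropic errors)
        vals, vecs = np.linalg.eigh(sigma_fit)
        gamma = vecs[:, np.argsort(vals)[::-1][:d]]
        return Xc @ gamma                            # estimated reduction R(X)

Under the isotropic model the reduction is spanned by \(\Gamma\) alone; the paper's abundant high-dimensional setting instead weights by a sparsely estimated \(\Delta^{-1}\) (e.g., from SPICE), which is what the package implements.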
Cites Work
- Estimating sufficient reductions of the predictors in abundant high-dimensional regressions
- Sparse inverse covariance estimation with the graphical lasso
- Fisher lecture: Dimension reduction in regression
- On almost linearity of low dimensional projections from high dimensional data
- Principal fitted components for dimension reduction in regression
- Covariance regularization by thresholding
- Dimension reduction for nonelliptically distributed predictors
- Sparsistency and rates of convergence in large covariance matrix estimation
- Dimension reduction strategies for analyzing global gene expression data with a response
- On the distribution of the largest eigenvalue in principal components analysis
- Testing predictor contributions in sufficient dimension reduction
- Sparse permutation invariant covariance estimation
- High-dimensional covariance estimation by minimizing \(\ell_1\)-penalized log-determinant divergence
- Kernel dimension reduction in regression
- Regularized estimation of large covariance matrices
- Covariance-Regularized Regression and Classification for High Dimensional Problems
- Asymptotic properties of sufficient dimension reduction with a diverging number of predictors
- Model selection and estimation in the Gaussian graphical model
- Shrinkage Inverse Regression Estimation for Model-Free Variable Selection
- Sufficient dimension reduction and prediction in regression
- Dimension reduction for non-elliptically distributed predictors: second-order methods
- First-Order Methods for Sparse Covariance Selection
- Sliced Inverse Regression for Dimension Reduction
- Extending Sliced Inverse Regression
- Theory & Methods: Special Invited Paper: Dimension Reduction and Visualization in Discriminant Analysis (with discussion)
- On Consistency and Sparsity for Principal Components Analysis in High Dimensions
- Dimension Reduction and Semiparametric Estimation of Survival Models
- Dimension Reduction in Regressions Through Cumulative Slicing Estimation
- Likelihood-Based Sufficient Dimension Reduction
- Dimension reduction in regression without matrix inversion
- Sliced Inverse Regression with Regularizations
- On Canonical Forms, Non-Negative Covariance Matrices and Best and Simple Least Squares Linear Estimators in Linear Models
- Sufficient Dimension Reduction via Inverse Regression
- Discussion on boosting papers