A selective overview of sparse sufficient dimension reduction
From MaRDI portal
Publication: Q5880034
DOI: 10.1080/24754269.2020.1829389
OpenAlex: W3103200295
MaRDI QID: Q5880034
Lu Li, Xuerong Meggie Wen, Zhou Yu
Publication date: 7 March 2023
Published in: Statistical Theory and Related Fields
Full work available at URL: https://doi.org/10.1080/24754269.2020.1829389
Cites Work
- Efficient estimation in sufficient dimension reduction
- Model-Free Feature Screening for Ultrahigh-Dimensional Data
- Nearly unbiased variable selection under minimax concave penalty
- Sliced Regression for Dimension Reduction
- The Adaptive Lasso and Its Oracle Properties
- Sparse Sliced Inverse Regression Via Lasso
- On marginal sliced inverse regression for ultrahigh dimensional model-free feature selection
- Coordinate-independent sparse sufficient dimension reduction and variable selection
- Minimax estimation in sparse canonical correlation analysis
- Principal fitted components for dimension reduction in regression
- Dimension reduction based on constrained canonical correlation and variable filtering
- Successive direction extraction for estimating the central subspace in a multiple-index regression
- Nonconcave penalized inverse regression in single-index models with high dimensional predictors
- On consistency and sparsity for sliced inverse regression in high dimensions
- Testing predictor contributions in sufficient dimension reduction
- Sparse SIR: optimal rates and adaptive estimation
- Kernel dimension reduction in regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\) (with discussions and rejoinder)
- On the optimality of sliced inverse regression in high dimensions
- On Distribution-Weighted Partial Least Squares with Diverging Number of Highly Correlated Predictors
- Asymptotic properties of sufficient dimension reduction with a diverging number of predictors
- Better Subset Regression Using the Nonnegative Garrote
- Shrinkage Inverse Regression Estimation for Model-Free Variable Selection
- On Directional Regression for Dimension Reduction
- Sliced Inverse Regression for Dimension Reduction
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma
- Graphics for Regressions With a Binary Response
- Estimating central subspaces via inverse third moments
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- A convex formulation for high-dimensional sparse sliced inverse regression
- Sure Independence Screening for Ultrahigh Dimensional Feature Space
- Feature Screening via Distance Correlation Learning
- Dimension Reduction for the Conditional \(k\)th Moment in Regression
- A Semiparametric Approach to Dimension Reduction
- A Review on Dimension Reduction
- Gradient-Based Kernel Dimension Reduction for Regression
- Model-Free Variable Selection With Matrix-Valued Predictors
- Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension
- Likelihood-Based Sufficient Dimension Reduction
- A note on shrinkage sliced inverse regression
- Sequential Sufficient Dimension Reduction for Large p, Small n Problems
- Algorithmic Learning Theory
- Dimension reduction and predictor selection in semiparametric models
- Model Selection and Estimation in Regression with Grouped Variables
- Sparse sufficient dimension reduction
- Sliced Inverse Regression with Regularizations
- On Estimation Efficiency of the Central Mean Subspace
- Sufficient Dimension Reduction via Inverse Regression
- On Sliced Inverse Regression With High-Dimensional Covariates
- Comment