Sparse Sliced Inverse Regression via Cholesky Matrix Penalization
Publication:5040485
Cites work
- A convex formulation for high-dimensional sparse sliced inverse regression
- Adaptive Lasso for sparse high-dimensional regression models
- An Adaptive Estimation of Dimension Reduction Space
- High-dimensional statistics. A non-asymptotic viewpoint
- Measuring and testing dependence by correlation of distances
- On Directional Regression for Dimension Reduction
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma
- On consistency and sparsity for sliced inverse regression in high dimensions
- On the optimality of sliced inverse regression in high dimensions
- Principal Hessian Directions Revisited
- Principal quantile regression for sufficient dimension reduction with heteroscedasticity
- Save: a method for dimension reduction and graphics in regression
- Sequential sufficient dimension reduction for large \(p\), small \(n\) problems
- Sliced Inverse Regression for Dimension Reduction
- Sparse minimum discrepancy approach to sufficient dimension reduction with simultaneous variable selection in ultrahigh dimension
- Sparse sliced inverse regression via Lasso
- Sufficient dimension reduction: methods and applications with R
- Testing predictor contributions in sufficient dimension reduction
- The Adaptive Lasso and Its Oracle Properties
- Using the Bootstrap to Select One of a New Class of Dimension Reduction Methods
Cited in (3)