Sparse Sliced Inverse Regression via Cholesky Matrix Penalization
Publication: 5040485
DOI: 10.5705/ss.202020.0406
OpenAlex: W3154356896
Wikidata: Q114013812 (Scholia: Q114013812)
MaRDI QID: Q5040485
Samuel Müller, Linh H. Nghiem, Francis K. C. Hui, A. H. Welsh
Publication date: 14 October 2022
Published in: Statistica Sinica
Full work available at URL: https://arxiv.org/abs/2104.09838
Cites Work
- Measuring and testing dependence by correlation of distances
- The Adaptive Lasso and Its Oracle Properties
- Sparse Sliced Inverse Regression via Lasso
- Principal quantile regression for sufficient dimension reduction with heteroscedasticity
- On consistency and sparsity for sliced inverse regression in high dimensions
- Testing predictor contributions in sufficient dimension reduction
- On the optimality of sliced inverse regression in high dimensions
- On Directional Regression for Dimension Reduction
- Principal Hessian Directions Revisited
- Sliced Inverse Regression for Dimension Reduction
- On Principal Hessian Directions for Data Visualization and Dimension Reduction: Another Application of Stein's Lemma
- Using the Bootstrap to Select One of a New Class of Dimension Reduction Methods
- SAVE: a method for dimension reduction and graphics in regression
- A convex formulation for high-dimensional sparse sliced inverse regression
- High-Dimensional Statistics
- An Adaptive Estimation of Dimension Reduction Space
- Sparse Minimum Discrepancy Approach to Sufficient Dimension Reduction with Simultaneous Variable Selection in Ultrahigh Dimension
- Sequential Sufficient Dimension Reduction for Large p, Small n Problems