Prediction error bounds for linear regression with the TREX
Publication: 2273161
DOI: 10.1007/s11749-018-0584-4
zbMath: 1420.62304
arXiv: 1801.01394
OpenAlex: W2964284244
Wikidata: Q129870070
Scholia: Q129870070
MaRDI QID: Q2273161
Jacob Bien, Irina Gaynanova, Johannes Lederer, Christian L. Müller
Publication date: 18 September 2019
Published in: Test
Full work available at URL: https://arxiv.org/abs/1801.01394
Related Items (4)
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Tuning-free ridge estimators for high-dimensional generalized linear models
- A Tuning-free Robust and Efficient Approach to High-dimensional Regression
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- The Bernstein-Orlicz norm and deviation inequalities
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Mirror averaging with sparsity priors
- New concentration inequalities for suprema of empirical processes
- On the prediction performance of the Lasso
- Statistics for high-dimensional data. Methods, theory and applications.
- Exponential screening and optimal rates of sparse estimation
- Segmentation of the mean of heteroscedastic data via cross-validation
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Near-ideal model selection by \(\ell _{1}\) minimization
- Controlling the false discovery rate via knockoffs
- A survey of cross-validation procedures for model selection
- Gaussian model selection with an unknown variance
- The Bennett-Orlicz norm
- Oracle inequalities for high-dimensional prediction
- Weak convergence and empirical processes. With applications to statistics
- On the conditions used to prove oracle results for the Lasso
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Simultaneous analysis of Lasso and Dantzig selector
- Optimal two-step prediction in regression
- Perspective functions: proximal calculus and applications in high-dimensional statistics
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- A permutation approach for selecting the penalty parameter in penalized model selection
- A Practical Scheme and Fast Algorithm to Tune the Lasso With Optimality Guarantees
- How Correlations Influence Lasso Prediction
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Scaled sparse linear regression
- Non-Convex Global Minimization and False Discovery Rate Control for the TREX
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Stability Selection
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell _{1}\)-Constrained Quadratic Programming (Lasso)
- Variable Selection with Error Control: Another Look at Stability Selection
- Aggregation and Sparsity Via \(\ell _{1}\) Penalized Least Squares
- The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Model Selection and Estimation in Regression with Grouped Variables
- The Lasso, correlated design, and improved oracle inequalities
- High-dimensional regression with unknown variance