Prediction error bounds for linear regression with the TREX
Publication: 2273161
Abstract: The TREX is a recently introduced approach to sparse linear regression. In contrast to most well-known approaches to penalized regression, the TREX can be formulated without the use of tuning parameters. In this paper, we establish the first known prediction error bounds for the TREX. Additionally, we introduce extensions of the TREX to a more general class of penalties, and we provide a bound on the prediction error in this generalized setting. These results deepen the understanding of the TREX from a theoretical perspective and provide new insights into penalized regression in general.
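For context, the TREX objective can be stated explicitly. The following is the standard formulation from the original TREX proposal of Lederer and Müller, included here as background; the constant \(c\) and the exact normalization are conventional choices from that literature, not quoted from this paper's abstract:

\[
\hat{\beta}^{\mathrm{TREX}} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p} \left\{ \frac{\|Y - X\beta\|_2^2}{c\,\|X^\top (Y - X\beta)\|_\infty} + \|\beta\|_1 \right\}, \qquad c \in (0,1) \text{ (typically } c = 1/2\text{)},
\]

where \(Y \in \mathbb{R}^n\) is the response vector and \(X \in \mathbb{R}^{n \times p}\) the design matrix. The data-driven denominator plays the role of the Lasso's tuning parameter, which is why the TREX requires no tuning; the generalized TREX mentioned in the abstract replaces \(\|\beta\|_1\) by other penalty functions.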
Recommendations
- On tight bounds for the Lasso
- Estimator of prediction error based on approximate message passing for penalized linear regression
- Error bounds for the convex loss Lasso in linear models
- On non-asymptotic bounds for estimation in generalized linear models with highly correlated design
- Oracle inequalities for high-dimensional prediction
Cites work
- scientific article; zbMATH DE number 5654889 (no title available)
- scientific article; zbMATH DE number 845714 (no title available)
- A permutation approach for selecting the penalty parameter in penalized model selection
- A practical scheme and fast algorithm to tune the Lasso with optimality guarantees
- A survey of cross-validation procedures for model selection
- Aggregation and Sparsity Via \(\ell _{1}\) Penalized Least Squares
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- Concentration inequalities. A nonasymptotic theory of independence
- Controlling the false discovery rate via knockoffs
- Exponential screening and optimal rates of sparse estimation
- Gaussian model selection with an unknown variance
- High-dimensional regression with unknown variance
- How Correlations Influence Lasso Prediction
- Mirror averaging with sparsity priors
- Model Selection and Estimation in Regression with Grouped Variables
- Near-ideal model selection by \(\ell _{1}\) minimization
- Nearly unbiased variable selection under minimax concave penalty
- New concentration inequalities for suprema of empirical processes
- Non-Convex Global Minimization and False Discovery Rate Control for the TREX
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- On the conditions used to prove oracle results for the Lasso
- On the prediction performance of the Lasso
- Optimal two-step prediction in regression
- Oracle inequalities for high-dimensional prediction
- Perspective functions: proximal calculus and applications in high-dimensional statistics
- Restricted eigenvalue properties for correlated Gaussian designs
- Scaled sparse linear regression
- Segmentation of the mean of heteroscedastic data via cross-validation
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell _{1}\)-Constrained Quadratic Programming (Lasso)
- Simultaneous analysis of Lasso and Dantzig selector
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Stability Selection
- Statistics for high-dimensional data. Methods, theory and applications.
- The Bennett-Orlicz norm
- The Bernstein-Orlicz norm and deviation inequalities
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- The Lasso, correlated design, and improved oracle inequalities
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Variable Selection with Error Control: Another Look at Stability Selection
- Weak convergence and empirical processes. With applications to statistics
Cited in (6)
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Tuning-free ridge estimators for high-dimensional generalized linear models
- Layer sparsity in neural networks
- A tuning-free robust and efficient approach to high-dimensional regression
- Integrating additional knowledge into the estimation of graphical models
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring