Prediction error bounds for linear regression with the TREX
Publication:2273161
DOI: 10.1007/S11749-018-0584-4
zbMATH Open: 1420.62304
arXiv: 1801.01394
OpenAlex: W2964284244
Wikidata: Q129870070
Scholia: Q129870070
MaRDI QID: Q2273161
FDO: Q2273161
Authors: Jacob Bien, Irina Gaynanova, Christian Mueller, Johannes Lederer
Publication date: 18 September 2019
Published in: Test
Abstract: The TREX is a recently introduced approach to sparse linear regression. In contrast to most well-known approaches to penalized regression, the TREX can be formulated without the use of tuning parameters. In this paper, we establish the first known prediction error bounds for the TREX. Additionally, we introduce extensions of the TREX to a more general class of penalties, and we provide a bound on the prediction error in this generalized setting. These results deepen the understanding of the TREX from a theoretical perspective and provide new insights into penalized regression in general.
Full work available at URL: https://arxiv.org/abs/1801.01394
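For orientation, below is a minimal sketch of the standard TREX objective as introduced by Lederer and Müller; the notation \(y \in \mathbb{R}^{n}\), \(X \in \mathbb{R}^{n \times p}\), and the constant \(1/2\) follow the usual formulation and are stated here as assumptions, while the paper additionally treats generalized variants with other constants and penalties:
\[
\hat{\beta}_{\mathrm{TREX}} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}}
\left\{
\frac{\lVert y - X\beta \rVert_{2}^{2}}{\tfrac{1}{2}\,\lVert X^{\top}(y - X\beta) \rVert_{\infty}}
+ \lVert \beta \rVert_{1}
\right\}
\]
The data-dependent denominator takes over the role of the tuning parameter required by the Lasso and the square-root Lasso, which is why the TREX can be formulated without a tuning parameter.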
Recommendations
- On tight bounds for the Lasso
- Estimator of prediction error based on approximate message passing for penalized linear regression
- Error bounds for the convex loss Lasso in linear models
- On non-asymptotic bounds for estimation in generalized linear models with highly correlated design
- Oracle inequalities for high-dimensional prediction
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- Weak convergence and empirical processes. With applications to statistics
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Stability Selection
- Title not available
- Statistics for high-dimensional data. Methods, theory and applications.
- A survey of cross-validation procedures for model selection
- On the conditions used to prove oracle results for the Lasso
- Simultaneous analysis of Lasso and Dantzig selector
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- Restricted eigenvalue properties for correlated Gaussian designs
- Square-root lasso: pivotal recovery of sparse signals via conic programming
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- Model Selection and Estimation in Regression with Grouped Variables
- Gaussian model selection with an unknown variance
- Scaled sparse linear regression
- New concentration inequalities for suprema of empirical processes
- Concentration inequalities. A nonasymptotic theory of independence
- Segmentation of the mean of heteroscedastic data via cross-validation
- Title not available
- Nuclear-norm penalization and optimal rates for noisy low-rank matrix completion
- Variable Selection with Error Control: Another Look at Stability Selection
- Controlling the false discovery rate via knockoffs
- The Group Square-Root Lasso: Theoretical Properties and Fast Algorithms
- High-dimensional regression with unknown variance
- On the prediction performance of the Lasso
- Aggregation and Sparsity Via ℓ1 Penalized Least Squares
- Exponential screening and optimal rates of sparse estimation
- Near-ideal model selection by \(\ell _{1}\) minimization
- The Lasso as an \(\ell _{1}\)-ball model selection procedure
- Optimal two-step prediction in regression
- The Lasso, correlated design, and improved oracle inequalities
- The Bernstein-Orlicz norm and deviation inequalities
- How Correlations Influence Lasso Prediction
- Sparse regression learning by aggregation and Langevin Monte-Carlo
- Mirror averaging with sparsity priors
- Aggregation by Exponential Weighting and Sharp Oracle Inequalities
- A practical scheme and fast algorithm to tune the Lasso with optimality guarantees
- Oracle inequalities for high-dimensional prediction
- Perspective functions: proximal calculus and applications in high-dimensional statistics
- Non-Convex Global Minimization and False Discovery Rate Control for the TREX
- The Bennett-Orlicz norm
- A permutation approach for selecting the penalty parameter in penalized model selection
Cited In (6)
- A self-calibrated direct approach to precision matrix estimation and linear discriminant analysis in high dimensions
- Tuning-free ridge estimators for high-dimensional generalized linear models
- A tuning-free robust and efficient approach to high-dimensional regression
- Prediction and estimation consistency of sparse multi-class penalized optimal scoring
- Layer sparsity in neural networks
- Integrating additional knowledge into the estimation of graphical models