The sparsity and bias of the LASSO selection in high-dimensional linear regression
From MaRDI portal
Publication: 939654
DOI: 10.1214/07-AOS520
zbMath: 1142.62044
arXiv: 0808.0967
OpenAlex: W3100041486
Wikidata: Q105584236
Scholia: Q105584236
MaRDI QID: Q939654
Publication date: 28 August 2008
Published in: The Annals of Statistics
Full work available at URL: https://arxiv.org/abs/0808.0967
random matrices; spectral analysis; bias; high-dimensional data; variable selection; penalized regression; rate consistency
Nonparametric regression and quantile regression (62G08)
Estimation in multivariate analysis (62H12)
Ridge regression; shrinkage estimators (Lasso) (62J07)
Linear regression; mixed models (62J05)
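The record's title concerns the sparsity and shrinkage bias of LASSO selection. As a minimal illustration (not part of the original record), the sketch below fits the penalized criterion \((2n)^{-1}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1\) by coordinate descent on synthetic data; the design, penalty level, and function names are illustrative assumptions, not anything taken from the paper itself:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: sign(z) * max(|z| - t, 0)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_sweeps=200):
    """Coordinate descent for (1/2n)||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature X_j'X_j / n
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual with coordinate j removed
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# illustrative synthetic data: 3 true signals among p = 20 coefficients
rng = np.random.default_rng(0)
n, p = 200, 20
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]
X = rng.standard_normal((n, p))
y = X @ beta_true + 0.1 * rng.standard_normal(n)

b_hat = lasso_cd(X, y, lam=0.3)
# the fit is sparse (null coordinates set exactly to zero), while the
# active coefficients are shrunk toward zero by roughly lam -- the bias
# that the paper's title refers to
```

With a near-orthogonal design the soft-threshold update zeros every coordinate whose correlation with the residual falls below \(\lambda\), which is where both the sparsity and the systematic downward bias of the selected coefficients come from.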
Related Items
Group screening for ultra-high-dimensional feature under linear model, Variable screening in multivariate linear regression with high-dimensional covariates, Sequential profile Lasso for ultra-high-dimensional partially linear models, Integrating Multisource Block-Wise Missing Data in Model Selection, Consistency of BIC Model Averaging, Structured sparse support vector machine with ordered features, Overlapping group lasso for high-dimensional generalized linear models, Variable selection for partially varying coefficient model based on modal regression under high dimensional data, In defense of LASSO, Two-step variable selection in partially linear additive models with time series data, Sparsity identification for high-dimensional partially linear model with measurement error, Sure independence screening for analyzing supersaturated designs, On estimation error bounds of the Elastic Net when p ≫ n, Sample average approximation with heavier tails II: localization in stochastic convex optimization and persistence results for the Lasso, Multistage Convex Relaxation Approach to Rank Regularized Minimization Problems Based on Equivalent Mathematical Program with a Generalized Complementarity Constraint, Solving constrained nonsmooth group sparse optimization via group Capped-\(\ell_1\) relaxation and group smoothing proximal gradient algorithm, Grouped variable selection with discrete optimization: computational and statistical perspectives, Dummy endogenous treatment effect estimation using high‐dimensional instrumental variables, Variance estimation in high-dimensional linear regression via adaptive elastic-net, Globally Adaptive Longitudinal Quantile Regression With High Dimensional Compositional Covariates, A communication-efficient method for ℓ0 regularization linear regression models, Inferences for extended partially linear single-index models, On Joint Estimation of Gaussian Graphical Models for Spatial and Temporal Data, Penalized wavelet nonparametric 
univariate logistic regression for irregular spaced data, A nonconvex nonsmooth image prior based on the hyperbolic tangent function, Communication-efficient estimation for distributed subset selection, Quantile forward regression for high-dimensional survival data, Controlling False Discovery Rate Using Gaussian Mirrors, Honest Confidence Sets for High-Dimensional Regression by Projection and Shrinkage, Debiasing convex regularized estimators and interval estimation in linear models, Generalized matrix decomposition regression: estimation and inference for two-way structured data, Post-selection Inference of High-dimensional Logistic Regression Under Case–Control Design, Statistical Learning for Individualized Asset Allocation, Forward-selected panel data approach for program evaluation, Inference for High-Dimensional Censored Quantile Regression, Two-stage communication-efficient distributed sparse M-estimation with missing data, A quadratic upper bound algorithm for regression analysis of credit risk under the proportional hazards model with case-cohort data, Improving the accuracy and internal consistency of regression-based clustering of high-dimensional datasets, A nonlinear mixed-integer programming approach for variable selection in linear regression model, Multiple Change Point Detection in Reduced Rank High Dimensional Vector Autoregressive Models, Culling the Herd of Moments with Penalized Empirical Likelihood, Homogeneity and Sparsity Analysis for High-Dimensional Panel Data Models, A joint estimation for the high-dimensional regression modeling on stratified data, GAP: A General Framework for Information Pooling in Two-Sample Sparse Inference, Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation, Moderate-Dimensional Inferences on Quadratic Functionals in Ordinary Least Squares, On the finite-sample analysis of \(\Theta\)-estimators, Convex Optimization for Group Feature Selection in Networked Data, A Tuning-free Robust and Efficient Approach to High-dimensional Regression, Feature Screening for Network Autoregression Model, Truncated $L^1$ Regularized Linear Regression: Theory and Algorithm, Integrative Analysis of Cancer Diagnosis Studies with Composite Penalization, Consistent parameter estimation for Lasso and approximate message passing, Group variable selection via SCAD-L2, A polynomial algorithm for best-subset selection problem, Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models, Inference for biased models: a quasi-instrumental variable approach, A selective review of group selection in high-dimensional models, A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers, A general theory of concave regularization for high-dimensional sparse estimation problems, Performance bounds for parameter estimates of high-dimensional linear models with correlated errors, Weak signals in high-dimensional regression: Detection, estimation and prediction, Discussion: One-step sparse estimates in nonconcave penalized likelihood models, Variance estimation based on blocked 3×2 cross-validation in high-dimensional linear regression, Structural identification and variable selection in high-dimensional varying-coefficient models, On cross-validated Lasso in high dimensions, Local linear smoothing for sparse high dimensional varying coefficient models, Fitting sparse linear models under the sufficient and necessary condition for model identification, Best subset selection via a modern optimization lens, An analysis of penalized interaction models, Selection of fixed effects in high dimensional linear mixed models using a multicycle ECM algorithm, Testing predictor significance with ultra high dimensional multivariate responses, Sparse recovery via nonconvex regularized \(M\)-estimators over \(\ell_q\)-balls,
Penalized logspline density estimation using total variation penalty, Some sharp performance bounds for least squares regression with \(L_1\) regularization, High-dimensional variable selection, Adaptive shrinkage of singular values, Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model, Variable selection in censored quantile regression with high dimensional data, The benefit of group sparsity in group inference with de-biased scaled group Lasso, Least squares approximation with a diverging number of parameters, Test for high-dimensional regression coefficients using refitted cross-validation variance estimation, Sub-optimality of some continuous shrinkage priors, Trace regression model with simultaneously low rank and row(column) sparse parameter, Inference for biased transformation models, On stepwise pattern recovery of the fused Lasso, Strong consistency of Lasso estimators, On the sign consistency of the Lasso for the high-dimensional Cox model, Moderately clipped Lasso, Oracle inequalities for the lasso in the Cox model, Rates of convergence of the adaptive LASSO estimators to the oracle distribution and higher order refinements by the bootstrap, Statistical significance in high-dimensional linear models, Sparse recovery under matrix uncertainty, Polynomial spline estimation for generalized varying coefficient partially linear models with a diverging number of components, Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression, Generalized \(F\) test for high dimensional linear regression coefficients, Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization, Adjusted regularized estimation in the accelerated failure time model with high dimensional covariates, Variable selection in high-dimensional quantile varying coefficient models, Variable selection and regression analysis for graph-structured covariates with an 
application to genomics, Correlated variables in regression: clustering and sparse estimation, Bayesian linear regression with sparse priors, Controlling the false discovery rate via knockoffs, Globally adaptive quantile regression with ultra-high dimensional data, \(\ell_{1}\)-penalization for mixture regression models, Model selection and structure specification in ultra-high dimensional generalised semi-varying coefficient models, High-dimensional simultaneous inference with the bootstrap, Consistent group selection in high-dimensional linear regression, Shrinkage estimation for identification of linear components in additive models, Generalized F-test for high dimensional regression coefficients of partially linear models, Adaptive Dantzig density estimation, Group selection in high-dimensional partially linear additive models, Group coordinate descent algorithms for nonconvex penalized regression, Non-convex penalized estimation in high-dimensional models with single-index structure, Sparse regression learning by aggregation and Langevin Monte-Carlo, Mirror averaging with sparsity priors, Estimation in high-dimensional linear models with deterministic design matrices, Regularization for Cox's proportional hazards model with NP-dimensionality, Lasso penalized model selection criteria for high-dimensional multivariate linear regression analysis, The sparse Laplacian shrinkage estimator for high-dimensional regression, Parametric or nonparametric? 
A parametricness index for model selection, Oracle inequalities and optimal inference under group sparsity, Focused vector information criterion model selection and model averaging regression with missing response, Bayesian high-dimensional screening via MCMC, SCAD penalized rank regression with a diverging number of parameters, Concave group methods for variable selection and estimation in high-dimensional varying coefficient models, A sharp nonasymptotic bound and phase diagram of \(L_{1/2}\) regularization, Nonconvex penalized ridge estimations for partially linear additive models in ultrahigh dimension, Optimal computational and statistical rates of convergence for sparse nonconvex learning problems, CAM: causal additive models, high-dimensional order search and penalized regression, High-dimensional Bayesian inference in nonparametric additive models, On the residual empirical process based on the ALASSO in high dimensions and its functional oracle property, Going beyond oracle property: selection consistency and uniqueness of local solution of the generalized linear model, Pathwise coordinate optimization for sparse learning: algorithm and theory, High dimensional censored quantile regression, Bayesian estimation of sparse signals with a continuous spike-and-slab prior, Additive model selection, Tuning parameter selection for the adaptive LASSO in the autoregressive model, High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity, Exponential screening and optimal rates of sparse estimation, Performance guarantees for individualized treatment rules, Consistent tuning parameter selection in high dimensional sparse linear regression, Semi-varying coefficient models with a diverging number of components, Least angle and \(\ell _{1}\) penalized regression: a review, On asymptotically optimal confidence regions and tests for high-dimensional
models, Lasso Inference for High-Dimensional Time Series, Nearly unbiased variable selection under minimax concave penalty, The benefit of group sparsity, Optimal rates of convergence for covariance matrix estimation, Variable selection in nonparametric additive models, SPADES and mixture models, Lasso-type recovery of sparse representations for high-dimensional data, Two sample tests for high-dimensional covariance matrices, Asymptotic normality and optimalities in estimation of large Gaussian graphical models, Fast global convergence of gradient methods for high-dimensional statistical recovery, Testing covariates in high-dimensional regression, Variable selection in the accelerated failure time model via the bridge method, On Bayesian lasso variable selection and the specification of the shrinkage parameter, APPLE: approximate path for penalized likelihood estimators, Spline estimator for simultaneous variable selection and constant coefficient identification in high-dimensional generalized varying-coefficient models, SCAD-penalized regression in high-dimensional partially linear models, Tournament screening cum EBIC for feature selection with high-dimensional feature spaces, High-dimensional additive modeling, Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates, Modeling association between multivariate correlated outcomes and high-dimensional sparse covariates: the adaptive SVS method, REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES, Bayesian inference in high-dimensional linear models using an empirical correlation-adaptive prior, \(\ell_0\)-regularized high-dimensional accelerated failure time model, Variable selection for semiparametric regression models with iterated penalisation, Inference for low-rank tensors -- no need to debias, Adaptive log-density estimation, De-biasing the Lasso with degrees-of-freedom adjustment, Using Improved 
Robust Estimators to Semiparametric Model with High Dimensional Data, Goodness-of-fit tests for high-dimensional Gaussian linear models, Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization, Identification of Partially Linear Structure in Additive Models with an Application to Gene Expression Prediction from Sequences, SCAD-Penalized Least Absolute Deviation Regression in High-Dimensional Models, Sparse reduced-rank regression for multivariate varying-coefficient models, Simultaneous analysis of Lasso and Dantzig selector, Bayesian factor-adjusted sparse regression, Robust post-selection inference of high-dimensional mean regression with heavy-tailed asymmetric or heteroskedastic errors, Monte Carlo Simulation for Lasso-Type Problems by Estimator Augmentation, Identification of breast cancer prognosis markers via integrative analysis, An iterative algorithm for fitting nonconvex penalized generalized linear models with grouped predictors, \(\ell _{1}\)-regularized linear regression: persistence and oracle inequalities, D-trace estimation of a precision matrix using adaptive lasso penalties, A nonconvex model with minimax concave penalty for image restoration, Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model, Some improved estimation strategies in high-dimensional semiparametric regression models with application to riboflavin production data, Nearly optimal Bayesian shrinkage for high-dimensional regression, A simple homotopy proximal mapping algorithm for compressive sensing, Needles and straw in a haystack: posterior concentration for possibly sparse sequences, Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators, Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression, \(\ell_1\)-penalized quantile regression in high-dimensional sparse models, Adaptive Lasso for generalized linear models with a 
diverging number of parameters, Multi-stage convex relaxation for feature selection, Thresholding-based iterative selection procedures for model selection and shrinkage, On the conditions used to prove oracle results for the Lasso, Sparse regression with exact clustering, Adaptive estimation of covariance matrices via Cholesky decomposition, The Lasso as an \(\ell _{1}\)-ball model selection procedure, The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso), The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods, Least squares after model selection in high-dimensional sparse models, High-dimensional sparse portfolio selection with nonnegative constraint, Multiple structural breaks in cointegrating regressions: a model selection approach, Calibrating nonconvex penalized regression in ultra-high dimension, Sign-constrained least squares estimation for high-dimensional regression, Double-slicing assisted sufficient dimension reduction for high-dimensional censored data, Which bridge estimator is the best for variable selection?, A general framework for Bayes structured linear models, Estimation in the presence of many nuisance parameters: composite likelihood and plug-in likelihood, Adaptive Lasso estimators for ultrahigh dimensional generalized linear models, Multiple predicting K-fold cross-validation for model selection, Greedy forward regression for variable screening, Debiasing the Lasso: optimal sample size for Gaussian designs, Group variable selection for data with dependent structures, Pivotal estimation via square-root lasso in nonparametric regression, High-dimensional variable screening and bias in subsequent inference, with an empirical comparison, Adaptive group Lasso for high-dimensional generalized linear models, Learning latent variable Gaussian graphical model for biomolecular network with low sample complexity, Parametric and semiparametric reduced-rank regression with
flexible sparsity, Greedy variance estimation for the LASSO, Consistency of Bayesian linear model selection with a growing number of parameters, Group Regularized Estimation Under Structural Hierarchy, Sorted concave penalized regression, Endogeneity in high dimensions, A unified primal dual active set algorithm for nonconvex sparse recovery, Optimal sparsity testing in linear regression model, Convergence and sparsity of Lasso and group Lasso in high-dimensional generalized linear models, Variable selection for longitudinal data with high-dimensional covariates and dropouts, Adaptive group bridge selection in the semiparametric accelerated failure time model, Robust group non-convex estimations for high-dimensional partially linear models, High-dimensional regression in practice: an empirical study of finite-sample prediction, variable selection and ranking, Second-order Stein: SURE for SURE and other applications in high-dimensional inference, Model Selection via Bayesian Information Criterion for Quantile Regression Models, Interaction Screening for Ultrahigh-Dimensional Data, A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates, GENERALIZED ADDITIVE PARTIAL LINEAR MODELS WITH HIGH-DIMENSIONAL COVARIATES, Optimal linear discriminators for the discrete choice model in growing dimensions, In defense of the indefensible: a very naïve approach to high-dimensional inference, Confidence intervals for parameters in high-dimensional sparse vector autoregression, Feature selection for data integration with mixed multiview data, Localized Gaussian width of \(M\)-convex hulls with applications to Lasso and convex aggregation, A Mixed-Integer Fractional Optimization Approach to Best Subset Selection, A knockoff filter for high-dimensional selective inference, Nonbifurcating Phylogenetic Tree Inference via the Adaptive LASSO, Sample average approximation with sparsity-inducing penalty for
high-dimensional stochastic programming, On Cross-Validation for Sparse Reduced Rank Regression, A NEW APPROACH TO SELECT THE BEST SUBSET OF PREDICTORS IN LINEAR REGRESSION MODELLING: BI-OBJECTIVE MIXED INTEGER LINEAR PROGRAMMING, Sure Independence Screening for Ultrahigh Dimensional Feature Space, Stability Selection, High-dimensional linear regression with hard thresholding regularization: theory and algorithm, Asymptotic Equivalence of Regularization Methods in Thresholded Parameter Space, Minimax-optimal nonparametric regression in high dimensions, A new test for part of high dimensional regression coefficients, High-dimensional variable screening through kernel-based conditional mean dependence, Variable selection in high-dimensional partly linear additive models, An adaptive accelerated proximal gradient method and its homotopy continuation for sparse optimization
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- Lasso-type recovery of sparse representations for high-dimensional data
- The smallest eigenvalue of a large dimensional Wishart matrix
- A limit theorem for the norm of random matrices
- Minimax risk over \(l_p\)-balls for \(l_q\)-error
- Persistence in high-dimensional linear predictor selection and the virtue of overparametrization
- Asymptotics for Lasso-type estimators.
- Least angle regression. (With discussion)
- The risk inflation criterion for multiple regression
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- A new approach to variable selection in least squares problems
- For most large underdetermined systems of equations, the minimal 𝓁1‐norm near‐solution approximates the sparsest near‐solution