On the conditions used to prove oracle results for the Lasso



DOI: 10.1214/09-EJS506
zbMath: 1327.62425
arXiv: 0910.0722
Wikidata: Q98839733 (Scholia: Q98839733)
MaRDI QID: Q1952029

Sara van de Geer, Peter Bühlmann

Publication date: 27 May 2013

Published in: Electronic Journal of Statistics

Full work available at URL: https://arxiv.org/abs/0910.0722


62J07: Ridge regression; shrinkage estimators (Lasso)

62G05: Nonparametric estimation

62C05: General considerations in statistical decision theory
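
As background on the record's subject (a brief sketch, assuming the standard high-dimensional linear model \(Y = X\beta^0 + \varepsilon\) with an \(n \times p\) design matrix \(X\); the notation is illustrative, not taken from this record): the paper compares the conditions on the design under which the Lasso,
\[
\hat{\beta} = \arg\min_{\beta \in \mathbb{R}^p} \left\{ \frac{\|Y - X\beta\|_2^2}{n} + \lambda \|\beta\|_1 \right\},
\]
satisfies oracle inequalities for prediction and estimation. A representative condition from this literature is the compatibility condition: with active set \(S_0 = \{j : \beta^0_j \neq 0\}\), \(s_0 = |S_0|\), and Gram matrix \(\hat{\Sigma} = X^\top X / n\), there exists a constant \(\phi_0 > 0\) such that
\[
\|\beta_{S_0}\|_1^2 \le \frac{s_0 \, \beta^\top \hat{\Sigma} \beta}{\phi_0^2}
\quad \text{for all } \beta \text{ with } \|\beta_{S_0^c}\|_1 \le 3 \|\beta_{S_0}\|_1 .
\]
Restricted isometry, restricted eigenvalue, irrepresentable, and coherence conditions are related notions compared in this line of work.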


Related Items

A Cluster Elastic Net for Multivariate Regression
Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error
On asymptotically optimal confidence regions and tests for high-dimensional models
Regularized estimation in sparse high-dimensional time series models
False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation
Asymptotic normality and optimalities in estimation of large Gaussian graphical models
Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Local linear smoothing for sparse high dimensional varying coefficient models
Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates
Regularity properties for sparse regression
An analysis of penalized interaction models
Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable
SLOPE is adaptive to unknown sparsity and asymptotically minimax
Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model
The benefit of group sparsity in group inference with de-biased scaled group Lasso
The \(l_q\) consistency of the Dantzig selector for Cox's proportional hazards model
Oracle inequalities for the lasso in the Cox model
Statistical significance in high-dimensional linear models
Impacts of high dimensionality in finite samples
Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization
Correlated variables in regression: clustering and sparse estimation
Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization
Transductive versions of the Lasso and the Dantzig selector
Regularization for Cox's proportional hazards model with NP-dimensionality
On higher order isotropy conditions and lower bounds for sparse quadratic forms
Normalized and standard Dantzig estimators: two approaches
Oracle inequalities for high dimensional vector autoregressions
Robust inference on average treatment effects with possibly more covariates than observations
Decomposable norm minimization with proximal-gradient homotopy algorithm
Additive model selection
Exponential screening and optimal rates of sparse estimation
\(\ell_{1}\)-penalization for mixture regression models
Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models
Generalization of constraints for high dimensional regression problems
Generalized M-estimators for high-dimensional Tobit I models
High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity
Fast global convergence of gradient methods for high-dimensional statistical recovery
Accuracy guaranties for \(\ell_{1}\) recovery of block-sparse signals
Fitting sparse linear models under the sufficient and necessary condition for model identification
High dimensional regression for regenerative time-series: an application to road traffic modeling
Bayesian linear regression with sparse priors
Efficient nonconvex sparse group feature selection via continuous and discrete optimization
\(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors
Finite mixture regression: a sparse variable selection by model selection for clustering
On the prediction loss of the Lasso in the partially labeled setting
A general family of trimmed estimators for robust high-dimensional data analysis
On the post selection inference constant under restricted isometry properties
Best subset binary prediction
Generalized Kalman smoothing: modeling and algorithms
Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions
A Rice method proof of the null-space property over the Grassmannian
Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso
A two-stage regularization method for variable selection and forecasting in high-order interaction model
A systematic review on model selection in high-dimensional regression
Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood
Oracle inequalities for high-dimensional prediction
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi}
On the exponentially weighted aggregate with the Laplace prior
The convex geometry of linear inverse problems
High-dimensional additive hazards models and the lasso
Estimating networks with jumps
The Lasso problem and uniqueness
PAC-Bayesian bounds for sparse regression estimation with exponential weights
The Lasso as an \(\ell _{1}\)-ball model selection procedure
The adaptive and the thresholded Lasso for potentially misspecified models (and a lower bound for the Lasso)
Sparsity considerations for dependent variables
The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods
Spatially-adaptive sensing in nonparametric regression
Sign-constrained least squares estimation for high-dimensional regression
Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs
Empirical Bayes oracle uncertainty quantification for regression
Inference under Fine-Gray competing risks model with high-dimensional covariates
Greedy variance estimation for the LASSO
Simultaneous feature selection and clustering based on square root optimization
Sparse space-time models: concentration inequalities and Lasso
Inference without compatibility: using exponential weighting for inference on a parameter of a linear model
The distribution of the Lasso: uniform control over sparse balls and adaptive parameter tuning
Control variate selection for Monte Carlo integration
Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning
High-dimensional inference for linear model with correlated errors
The finite sample properties of sparse M-estimators with pseudo-observations
In defense of the indefensible: a very naïve approach to high-dimensional inference
Robust subset selection
Fundamental barriers to high-dimensional regression with convex penalties
Canonical thresholding for nonsparse high-dimensional linear regression
Sparse high-dimensional linear regression. Estimating squared error and a phase transition
Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors
The variable selection by the Dantzig selector for Cox's proportional hazards model
Extreme eigenvalues of nonlinear correlation matrices with applications to additive models
High-dimensional regression with potential prior information on variable importance
Inference for high-dimensional instrumental variables regression
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
A general framework for Bayes structured linear models
A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al.
Pivotal estimation via square-root lasso in nonparametric regression
Sparse semiparametric discriminant analysis
High-dimensional variable screening and bias in subsequent inference, with an empirical comparison
Sparse distance metric learning
A global homogeneity test for high-dimensional linear regression
Sparsest representations and approximations of an underdetermined linear system
Goodness-of-Fit Tests for High Dimensional Linear Models
A study on tuning parameter selection for the high-dimensional lasso
A Simple Method for Estimating Interactions Between a Treatment and a Large Number of Covariates
A Sparse Learning Approach to Relative-Volatility-Managed Portfolio Selection
Graph-Based Regularization for Regression Problems with Alignment and Highly Correlated Designs
REMI: REGRESSION WITH MARGINAL INFORMATION AND ITS APPLICATION IN GENOME-WIDE ASSOCIATION STUDIES
Poisson Regression With Error Corrupted High Dimensional Features
High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks
A proximal dual semismooth Newton method for zero-norm penalized quantile regression estimator
Sparse linear regression models of high dimensional covariates with non-Gaussian outliers and Berkson error-in-variable under heteroscedasticity
Adaptive Bayesian SLOPE: Model Selection With Incomplete Data
Calibrated zero-norm regularized LS estimator for high-dimensional error-in-variables regression
Regularized estimation of high‐dimensional vector autoregressions with weakly dependent innovations
Optimal Sparse Linear Prediction for Block-missing Multi-modality Data Without Imputation
Oracle inequalities for the Lasso in the additive hazards model with interval-censored data
High-dimensional linear model selection motivated by multiple testing
Sparse recovery from extreme eigenvalues deviation inequalities
UNIFORM INFERENCE IN HIGH-DIMENSIONAL DYNAMIC PANEL DATA MODELS WITH APPROXIMATELY SPARSE FIXED EFFECTS
On the finite-sample analysis of \(\Theta\)-estimators
Consistent parameter estimation for Lasso and approximate message passing
The Lasso for High Dimensional Regression with a Possible Change Point
Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
Quasi-likelihood and/or robust estimation in high dimensions
A selective review of group selection in high-dimensional models
High-dimensional regression with unknown variance
A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers
A general theory of concave regularization for high-dimensional sparse estimation problems
Estimating structured high-dimensional covariance and precision matrices: optimal rates and adaptive estimation
Randomized maximum-contrast selection: subagging for large-scale regression
Comments on: \(\ell _{1}\)-penalization for mixture regression models
Oracle Inequalities for Convex Loss Functions with Nonlinear Targets
The robust desparsified lasso and the focused information criterion for high-dimensional generalized linear models
Markov Neighborhood Regression for High-Dimensional Inference
Gaining Outlier Resistance With Progressive Quantiles: Fast Algorithms and Theoretical Studies
Counterfactual Analysis With Artificial Controls: Inference, High Dimensions, and Nonstationarity
Binacox: automatic cut‐point detection in high‐dimensional Cox model with applications in genetics
Rejoinder to “Reader reaction to ‘Outcome‐adaptive Lasso: Variable selection for causal inference’ by Shortreed and Ertefaie (2017)”
Group sparse recovery via group square-root elastic net and the iterative multivariate thresholding-based algorithm
High-Dimensional Gaussian Graphical Regression Models with Covariates
ESTIMATION OF A HIGH-DIMENSIONAL COUNTING PROCESS WITHOUT PENALTY FOR HIGH-FREQUENCY EVENTS
Structure learning of exponential family graphical model with false discovery rate control
An efficient GPU-parallel coordinate descent algorithm for sparse precision matrix estimation via scaled Lasso
Sparse quantile regression
Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
Two-stage communication-efficient distributed sparse M-estimation with missing data
Prediction error bounds for linear regression with the TREX
Prediction and estimation consistency of sparse multi-class penalized optimal scoring
Sorted concave penalized regression
Structured analysis of the high-dimensional FMR model
A review of Gaussian Markov models for conditional independence
Sharp oracle inequalities for low-complexity priors
Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
Doubly penalized estimation in additive regression with high-dimensional data
The Dantzig selector for a linear model of diffusion processes
Weaker regularity conditions and sparse recovery in high-dimensional regression
Penalized least squares estimation in the additive model with different smoothness for the components
Lasso for sparse linear regression with exponentially \(\beta\)-mixing errors
A simple homotopy proximal mapping algorithm for compressive sensing
Multi-stage convex relaxation for feature selection
On the uniform convergence of empirical norms and inner products, with application to causal inference
Strong oracle optimality of folded concave penalized estimation
Double-estimation-friendly inference for high-dimensional misspecified models
Sparse estimation via lower-order penalty optimization methods in high-dimensional linear regression
Recovery of partly sparse and dense signals
An \(\ell_1\)-oracle inequality for the Lasso in multivariate finite mixture of multivariate Gaussian regression models
Randomized pick-freeze for sparse Sobol indices estimation in high dimension
Estimation for High-Dimensional Linear Mixed-Effects Models Using ℓ1-Penalization
Non-asymptotic oracle inequalities for the Lasso and Group Lasso in high dimensional logistic model
Variable selection in partial linear regression with functional covariate
A component lasso



Cites Work