On the conditions used to prove oracle results for the Lasso
Publication: 1952029
DOI: 10.1214/09-EJS506 · zbMath: 1327.62425 · arXiv: 0910.0722 · OpenAlex: W2092058109 · Wikidata: Q98839733 · Scholia: Q98839733 · MaRDI QID: Q1952029
Publication date: 27 May 2013
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/0910.0722
Keywords: coherence; compatibility; sparsity; Lasso; irrepresentable condition; restricted eigenvalue; restricted isometry
MSC classification: Ridge regression; shrinkage estimators (Lasso) (62J07) · Nonparametric estimation (62G05) · General considerations in statistical decision theory (62C05)
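For orientation, the compatibility condition listed among the keywords is commonly stated as follows; this is a sketch in standard notation (design matrix \(X \in \mathbb{R}^{n \times p}\), active set \(S\) with \(|S| = s\), constant \(L > 0\)), assumed here rather than taken from the record:
\[
\phi^2_{\mathrm{comp}}(L, S) = \min\left\{ \frac{s\,\|X\beta\|_2^2}{n\,\|\beta_S\|_1^2} \;:\; \|\beta_{S^c}\|_1 \le L\,\|\beta_S\|_1,\ \beta_S \neq 0 \right\} > 0 .
\]
The restricted eigenvalue condition is the analogous requirement with \(\|\beta_S\|_1^2 / s\) replaced by \(\|\beta_S\|_2^2\); since \(\|\beta_S\|_1^2 \le s\,\|\beta_S\|_2^2\), the restricted eigenvalue condition implies compatibility, so compatibility is the weaker of the two.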
Related Items (showing first 100 items)
Fundamental barriers to high-dimensional regression with convex penalties ⋮ Canonical thresholding for nonsparse high-dimensional linear regression ⋮ On the prediction loss of the Lasso in the partially labeled setting ⋮ A general family of trimmed estimators for robust high-dimensional data analysis ⋮ Local linear smoothing for sparse high dimensional varying coefficient models ⋮ Adaptive kernel estimation of the baseline function in the Cox model with high-dimensional covariates ⋮ Fitting sparse linear models under the sufficient and necessary condition for model identification ⋮ Regularity properties for sparse regression ⋮ An analysis of penalized interaction models ⋮ High dimensional regression for regenerative time-series: an application to road traffic modeling ⋮ Censored linear model in high dimensions. Penalised linear regression on high-dimensional data with left-censored response variable ⋮ Sparse high-dimensional linear regression. Estimating squared error and a phase transition ⋮ Bayesian high-dimensional semi-parametric inference beyond sub-Gaussian errors ⋮ On the post selection inference constant under restricted isometry properties ⋮ SLOPE is adaptive to unknown sparsity and asymptotically minimax ⋮ The variable selection by the Dantzig selector for Cox's proportional hazards model ⋮ Oracle inequalities for the Lasso in the high-dimensional Aalen multiplicative intensity model ⋮ The benefit of group sparsity in group inference with de-biased scaled group Lasso ⋮ Extreme eigenvalues of nonlinear correlation matrices with applications to additive models ⋮ High-dimensional regression with potential prior information on variable importance ⋮ The \(l_q\) consistency of the Dantzig selector for Cox's proportional hazards model ⋮ Oracle inequalities for the lasso in the Cox model ⋮ Best subset binary prediction ⋮ Statistical significance in high-dimensional linear models ⋮ Generalized Kalman smoothing: modeling and algorithms ⋮ Impacts of high dimensionality in finite samples ⋮ The convex geometry of linear inverse problems ⋮ Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization ⋮ Correlated variables in regression: clustering and sparse estimation ⋮ Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions ⋮ Bayesian linear regression with sparse priors ⋮ \(\ell_{1}\)-penalization for mixture regression models ⋮ Rejoinder to the comments on: \(\ell _{1}\)-penalization for mixture regression models ⋮ Efficient nonconvex sparse group feature selection via continuous and discrete optimization ⋮ Inference for high-dimensional instrumental variables regression ⋮ \(\ell_1\)-regularization of high-dimensional time-series models with non-Gaussian and heteroskedastic errors ⋮ Finite mixture regression: a sparse variable selection by model selection for clustering ⋮ Sharp support recovery from noisy random measurements by \(\ell_1\)-minimization ⋮ A Rice method proof of the null-space property over the Grassmannian ⋮ High-dimensional additive hazards models and the lasso ⋮ Estimating networks with jumps ⋮ The Lasso problem and uniqueness ⋮ Asymptotically honest confidence regions for high dimensional parameters by the desparsified conservative Lasso ⋮ PAC-Bayesian bounds for sparse regression estimation with exponential weights ⋮ The Lasso as an \(\ell _{1}\)-ball model selection procedure ⋮ The adaptive and the thresholded Lasso for potentially 
misspecified models (and a lower bound for the Lasso) ⋮ Sparsity considerations for dependent variables ⋮ The smooth-Lasso and other \(\ell _{1}+\ell _{2}\)-penalized methods ⋮ Spatially-adaptive sensing in nonparametric regression ⋮ Sign-constrained least squares estimation for high-dimensional regression ⋮ ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels ⋮ Transductive versions of the Lasso and the Dantzig selector ⋮ Regularization for Cox's proportional hazards model with NP-dimensionality ⋮ Generalization of constraints for high dimensional regression problems ⋮ A general framework for Bayes structured linear models ⋮ A two-stage regularization method for variable selection and forecasting in high-order interaction model ⋮ A systematic review on model selection in high-dimensional regression ⋮ A look at robustness and stability of \(\ell_1\)-versus \(\ell_0\)-regularization: discussion of papers by Bertsimas et al. and Hastie et al. ⋮ On higher order isotropy conditions and lower bounds for sparse quadratic forms ⋮ Normalized and standard Dantzig estimators: two approaches ⋮ Generalized M-estimators for high-dimensional Tobit I models ⋮ Optimal Kullback-Leibler aggregation in mixture density estimation by maximum likelihood ⋮ Oracle inequalities for high dimensional vector autoregressions ⋮ Robust inference on average treatment effects with possibly more covariates than observations ⋮ Oracle inequalities for high-dimensional prediction ⋮ Decomposable norm minimization with proximal-gradient homotopy algorithm ⋮ Additive model selection ⋮ Approximate \(\ell_0\)-penalized estimation of piecewise-constant signals on graphs ⋮ I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error ⋮ Pivotal estimation via square-root lasso in nonparametric regression ⋮ Empirical Bayes oracle uncertainty quantification for regression ⋮ A Cluster Elastic Net for Multivariate Regression ⋮ High-dimensional regression with noisy and missing data: provable guarantees with nonconvexity ⋮ Sparse semiparametric discriminant analysis ⋮ Covariate Selection in High-Dimensional Generalized Linear Models With Measurement Error ⋮ High-dimensional variable screening and bias in subsequent inference, with an empirical comparison ⋮ Sparse distance metric learning ⋮ Inference under Fine-Gray competing risks model with high-dimensional covariates ⋮ Exponential screening and optimal rates of sparse estimation ⋮ A global homogeneity test for high-dimensional linear regression ⋮ On asymptotically optimal confidence regions and tests for high-dimensional models ⋮ Greedy variance estimation for the LASSO ⋮ Regularized estimation in sparse high-dimensional time series models ⋮ Simultaneous feature selection and clustering based on square root optimization ⋮ Sparse space-time models: concentration inequalities and Lasso ⋮ High-dimensional inference: confidence intervals, \(p\)-values and R-software \texttt{hdi} ⋮ Inference without compatibility: using exponential weighting for inference on a parameter of a linear model ⋮ False Discovery Rate Control Under General Dependence By Symmetrized Data Aggregation ⋮ On the exponentially weighted aggregate with the Laplace prior ⋮ Asymptotic normality and optimalities in estimation of large Gaussian graphical models ⋮ Fast global convergence of gradient methods for high-dimensional statistical recovery ⋮ Accuracy guaranties for \(\ell_{1}\) recovery of block-sparse signals ⋮ The distribution of the 
Lasso: uniform control over sparse balls and adaptive parameter tuning ⋮ Control variate selection for Monte Carlo integration ⋮ Analysis of generalized Bregman surrogate algorithms for nonsmooth nonconvex statistical learning ⋮ High-dimensional inference for linear model with correlated errors ⋮ The finite sample properties of sparse M-estimators with pseudo-observations ⋮ In defense of the indefensible: a very naïve approach to high-dimensional inference ⋮ Robust subset selection ⋮ Adaptive estimation of the baseline hazard function in the Cox model by model selection, with high-dimensional covariates
Cites Work
- The Adaptive Lasso and Its Oracle Properties
- The Dantzig selector and sparsity oracle inequalities
- Some sharp performance bounds for least squares regression with \(L_1\) regularization
- Near-ideal model selection by \(\ell _{1}\) minimization
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- High-dimensional graphs and variable selection with the Lasso
- Extreme Eigenvalues of Toeplitz Forms and Applications to Elliptic Difference Equations
- Decoding by Linear Programming
- Shifting Inequality and Recovery of Sparse Signals
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using $\ell _{1}$-Constrained Quadratic Programming (Lasso)
- On Recovery of Sparse Signals Via $\ell _{1}$ Minimization
- Stable Recovery of Sparse Signals and an Oracle Inequality
- Sparse Density Estimation with \(\ell_1\) Penalties