Least squares after model selection in high-dimensional sparse models
Publication: Q1952433
DOI: 10.3150/11-BEJ410 · zbMath: 1456.62066 · arXiv: 1001.0188 · MaRDI QID: Q1952433
Victor Chernozhukov, Alexandre Belloni
Publication date: 30 May 2013
Published in: Bernoulli
Full work available at URL: https://arxiv.org/abs/1001.0188
MSC classifications: Nonparametric regression and quantile regression (62G08) · Ridge regression; shrinkage estimators (Lasso) (62J07) · Asymptotic properties of nonparametric inference (62G20)
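The procedure named in the title — refitting ordinary least squares on the support selected by the Lasso, often called the post-Lasso estimator — can be sketched in a few lines. The following is a minimal illustrative NumPy implementation, not the authors' code: the hand-rolled coordinate-descent Lasso, the fixed penalty `lam`, and the helper names `lasso_cd` and `post_lasso` are all assumptions made for the example.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent.

    Minimises (1/(2n)) * ||y - X b||^2 + lam * ||b||_1.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-column curvature X_j'X_j / n
    for _ in range(n_iter):
        for j in range(p):
            # partial residual with feature j removed from the fit
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            # soft-thresholding update for coordinate j
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def post_lasso(X, y, lam):
    """Post-Lasso: use the Lasso only to select the support, then refit by OLS."""
    b_lasso = lasso_cd(X, y, lam)
    support = np.flatnonzero(b_lasso != 0.0)
    b_post = np.zeros(X.shape[1])
    if support.size:
        # unpenalised least-squares refit on the selected columns
        b_post[support], *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
    return b_post, support
```

The refit step removes the shrinkage bias that the Lasso's \(\ell_1\) penalty leaves on the selected coefficients, which is the effect the publication analyses.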
Related Items (only showing first 100 items)
- An alternative to synthetic control for models with many covariates under sparsity
- On cross-validated Lasso in high dimensions
- Statistical inference in sparse high-dimensional additive models
- Lasso-driven inference in time and space
- Automatic Component Selection in Additive Modeling of French National Electricity Load Forecasting
- Lower and upper bound estimates of inequality of opportunity for emerging economies
- The effect of regularization in portfolio selection problems
- Variable selection and prediction with incomplete high-dimensional data
- Recent advances in statistical methodologies in evaluating program for high-dimensional data
- De-biasing the Lasso with degrees-of-freedom adjustment
- On estimation of the diagonal elements of a sparse precision matrix
- Does data splitting improve prediction?
- Model selection criteria for a linear model to solve discrete ill-posed problems on the basis of singular decomposition and random projection
- Random weighting in LASSO regression
- Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
- Complete subset regressions with large-dimensional sets of predictors
- Inference for biased transformation models
- Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
- Fast rates of minimum error entropy with heavy-tailed noise
- A penalized approach to covariate selection through quantile regression coefficient models
- Statistical inference for model parameters in stochastic gradient descent
- Transaction cost analytics for corporate bonds
- Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
- SONIC: social network analysis with influencers and communities
- An integrated precision matrix estimation for multivariate regression problems
- A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
- Inference on heterogeneous treatment effects in high-dimensional dynamic panels under weak dependence
- Inference for High-Dimensional Exchangeable Arrays
- Inference for high-dimensional instrumental variables regression
- Shrinkage estimation of dynamic panel data models with interactive fixed effects
- Bayesian Dynamic Variable Selection in High Dimensions
- Time-varying forecast combination for high-dimensional data
- Sequential change point detection for high-dimensional data using nonconvex penalized quantile regression
- Testing stochastic dominance with many conditioning variables
- Directed graphs and variable selection in large vector autoregressive models
- Uniform-in-Submodel Bounds for Linear Regression in a Model-Free Framework
- A latent class Cox model for heterogeneous time-to-event data
- Nonconvex penalized reduced rank regression and its oracle properties in high dimensions
- Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
- Finite mixture regression: a sparse variable selection by model selection for clustering
- Inference for low-rank models
- A dynamic screening algorithm for hierarchical binary marketing data
- LASSO for Stochastic Frontier Models with Many Efficient Firms
- When are Google Data Useful to Nowcast GDP? An Approach via Preselection and Shrinkage
- Recovery of partly sparse and dense signals
- Least squares after model selection in high-dimensional sparse models
- Multiple structural breaks in cointegrating regressions: a model selection approach
- On the sparsity of Lasso minimizers in sparse data recovery
- Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors
- Communication-efficient estimation of high-dimensional quantile regression
- Prediction with a flexible finite mixture-of-regressions
- Inference robust to outliers with \(\ell_1\)-norm penalization
- Generalized M-estimators for high-dimensional Tobit I models
- Debiased Inference on Treatment Effect in a High-Dimensional Model
- Robust inference on average treatment effects with possibly more covariates than observations
- Optimal bounds for aggregation of affine estimators
- Additive model selection
- ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models
- Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
- Debiasing the Lasso: optimal sample size for Gaussian designs
- I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
- Beyond support in two-stage variable selection
- Pivotal estimation via square-root lasso in nonparametric regression
- Time-dependent Poisson reduced rank models for political text data analysis
- On asymptotically optimal confidence regions and tests for high-dimensional models
- Parametric and semiparametric reduced-rank regression with flexible sparsity
- Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements
- Regularizing Double Machine Learning in Partially Linear Endogenous Models
- Block-based refitting in \(\ell_{12}\) sparse regularization
- Sorted concave penalized regression
- Endogeneity in high dimensions
- An Automated Approach Towards Sparse Single-Equation Cointegration Modelling
- Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
- A Projection Based Conditional Dependence Measure with Applications to High-dimensional Undirected Graphical Models
- High-dimensional variable selection via low-dimensional adaptive learning
- Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
- Regularization methods for high-dimensional sparse control function models
- Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
- Control variate selection for Monte Carlo integration
- Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals
- Robust measurement via a fused latent and graphical item response theory model
- Inference for high-dimensional varying-coefficient quantile regression
- In defense of the indefensible: a very naïve approach to high-dimensional inference
- On Lasso refitting strategies
- On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments
- Network differential connectivity analysis
- Projected spline estimation of the nonparametric function in high-dimensional partially linear models for massive data
- A comment on Hansen's risk of James-Stein and Lasso shrinkage
- Information criteria bias correction for group selection
- Automated Estimation of Vector Error Correction Models
- Using Machine Learning Methods to Support Causal Inference in Econometrics
- The Risk of James–Stein and Lasso Shrinkage
- Lassoing the Determinants of Retirement
- Optimal model averaging for divergent-dimensional Poisson regressions
- CLEAR: Covariant LEAst-Square Refitting with Applications to Image Restoration
Cites Work
- Sparse recovery under matrix uncertainty
- Oracle inequalities and optimal inference under group sparsity
- Sparsity in penalized empirical risk minimization
- The sparsity and bias of the LASSO selection in high-dimensional linear regression
- Lasso-type recovery of sparse representations for high-dimensional data
- Nonparametric curve estimation. Methods, theory, and applications
- Weak convergence and empirical processes. With applications to statistics
- Least squares after model selection in high-dimensional sparse models
- Simultaneous analysis of Lasso and Dantzig selector
- High-dimensional generalized linear models and the lasso
- Sparsity oracle inequalities for the Lasso
- Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators
- \(\ell_1\)-penalized quantile regression in high-dimensional sparse models
- Aggregation for Gaussian regression
- The Dantzig selector: statistical estimation when \(p\) is much larger than \(n\). (With discussions and rejoinder).
- On sparse reconstruction from Fourier and Gaussian measurements
- Sharp Thresholds for High-Dimensional and Noisy Sparsity Recovery Using \(\ell_1\)-Constrained Quadratic Programming (Lasso)
- Aggregation and Sparsity Via \(\ell_1\) Penalized Least Squares
- All of Nonparametric Statistics
- Introduction to nonparametric estimation