Least squares after model selection in high-dimensional sparse models



DOI: 10.3150/11-BEJ410
zbMath: 1456.62066
arXiv: 1001.0188
MaRDI QID: Q1952433

Victor Chernozhukov, Alexandre Belloni

Publication date: 30 May 2013

Published in: Bernoulli

Full work available at URL: https://arxiv.org/abs/1001.0188
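The paper studies the post-Lasso estimator: run the Lasso to select a model, then refit by ordinary least squares on the selected support, removing the Lasso's shrinkage bias on retained coefficients. The following is a minimal illustrative sketch of that two-step procedure; the coordinate-descent Lasso implementation, penalty level, and tolerance below are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=500):
    """Minimal cyclic coordinate descent for the Lasso:
    minimize 0.5 * ||y - X b||^2 + lam * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual with coordinate j removed from the fit.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j
            # Soft-thresholding update for coordinate j.
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return beta

def post_lasso(X, y, lam, tol=1e-10):
    """Step 1: Lasso for model selection.
    Step 2: OLS refit on the Lasso-selected support (post-Lasso)."""
    beta_lasso = lasso_cd(X, y, lam)
    support = np.flatnonzero(np.abs(beta_lasso) > tol)
    beta = np.zeros(X.shape[1])
    if support.size:
        # Least-squares refit restricted to the selected columns.
        beta[support] = np.linalg.lstsq(X[:, support], y, rcond=None)[0]
    return beta, support
```

In a sparse design with a strong signal, the refit coefficients on the true support are close to the truth rather than shrunk toward zero, which is the bias-removal property the post-Lasso analysis formalizes.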


Mathematics Subject Classification

62G08: Nonparametric regression and quantile regression

62J07: Ridge regression; shrinkage estimators (Lasso)

62G20: Asymptotic properties of nonparametric inference


Related Items

A penalized approach to covariate selection through quantile regression coefficient models
Using Machine Learning Methods to Support Causal Inference in Econometrics
Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
Transaction cost analytics for corporate bonds
Communication-efficient estimation of high-dimensional quantile regression
Inference robust to outliers with 1-norm penalization
On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments
AUTOMATED ESTIMATION OF VECTOR ERROR CORRECTION MODELS
CLEAR: Covariant LEAst-Square Refitting with Applications to Image Restoration
Automatic Component Selection in Additive Modeling of French National Electricity Load Forecasting
The Risk of James–Stein and Lasso Shrinkage
Lassoing the Determinants of Retirement
Optimal model averaging for divergent-dimensional Poisson regressions
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Inference on heterogeneous treatment effects in high-dimensional dynamic panels under weak dependence
Inference for High-Dimensional Exchangeable Arrays
BAYESIAN DYNAMIC VARIABLE SELECTION IN HIGH DIMENSIONS
Time-varying forecast combination for high-dimensional data
Sequential change point detection for high-dimensional data using nonconvex penalized quantile regression
Testing stochastic dominance with many conditioning variables
Directed graphs and variable selection in large vector autoregressive models
UNIFORM-IN-SUBMODEL BOUNDS FOR LINEAR REGRESSION IN A MODEL-FREE FRAMEWORK
A latent class Cox model for heterogeneous time-to-event data
Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
Inference for low-rank models
A dynamic screening algorithm for hierarchical binary marketing data
LASSO for Stochastic Frontier Models with Many Efficient Firms
When are Google Data Useful to Nowcast GDP? An Approach via Preselection and Shrinkage
On asymptotically optimal confidence regions and tests for high-dimensional models
Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements
Regularizing Double Machine Learning in Partially Linear Endogenous Models
An Automated Approach Towards Sparse Single-Equation Cointegration Modelling
A Projection Based Conditional Dependence Measure with Applications to High-dimensional Undirected Graphical Models
Variable selection and prediction with incomplete high-dimensional data
On estimation of the diagonal elements of a sparse precision matrix
Model selection criteria for a linear model to solve discrete ill-posed problems on the basis of singular decomposition and random projection
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
Robust inference on average treatment effects with possibly more covariates than observations
Additive model selection
Beyond support in two-stage variable selection
Generalized M-estimators for high-dimensional Tobit I models
On cross-validated Lasso in high dimensions
Statistical inference in sparse high-dimensional additive models
Lasso-driven inference in time and space
The effect of regularization in portfolio selection problems
Shrinkage estimation of dynamic panel data models with interactive fixed effects
Nonconvex penalized reduced rank regression and its oracle properties in high dimensions
Finite mixture regression: a sparse variable selection by model selection for clustering
Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
Complete subset regressions with large-dimensional sets of predictors
Inference for biased transformation models
Prediction with a flexible finite mixture-of-regressions
Optimal bounds for aggregation of affine estimators
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
Least squares after model selection in high-dimensional sparse models
ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models
Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
Debiasing the Lasso: optimal sample size for Gaussian designs
Time-dependent Poisson reduced rank models for political text data analysis
Parametric and semiparametric reduced-rank regression with flexible sparsity
Block-based refitting in \(\ell_{12}\) sparse regularization
Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
High-dimensional variable selection via low-dimensional adaptive learning
Control variate selection for Monte Carlo integration
Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals
Inference for high-dimensional varying-coefficient quantile regression
In defense of the indefensible: a very naïve approach to high-dimensional inference
Network differential connectivity analysis
A comment on Hansen's risk of James-Stein and Lasso shrinkage
Information criteria bias correction for group selection
Lower and upper bound estimates of inequality of opportunity for emerging economies
Recent advances in statistical methodologies in evaluating program for high-dimensional data
De-biasing the Lasso with degrees-of-freedom adjustment
Random weighting in LASSO regression
Fast rates of minimum error entropy with heavy-tailed noise
Statistical inference for model parameters in stochastic gradient descent
Inference for high-dimensional instrumental variables regression
Pivotal estimation via square-root lasso in nonparametric regression
Sorted concave penalized regression
Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
Regularization methods for high-dimensional sparse control function models
Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
Robust measurement via a fused latent and graphical item response theory model
On Lasso refitting strategies
Projected spline estimation of the nonparametric function in high-dimensional partially linear models for massive data
Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors
Endogeneity in high dimensions
Does data splitting improve prediction?
SONIC: social network analysis with influencers and communities
An integrated precision matrix estimation for multivariate regression problems
Recovery of partly sparse and dense signals
Multiple structural breaks in cointegrating regressions: a model selection approach
On the sparsity of Lasso minimizers in sparse data recovery
Debiased Inference on Treatment Effect in a High-Dimensional Model
An alternative to synthetic control for models with many covariates under sparsity



Cites Work