Least squares after model selection in high-dimensional sparse models

DOI: 10.3150/11-BEJ410
zbMath: 1456.62066
arXiv: 1001.0188
MaRDI QID: Q1952433

Victor Chernozhukov, Alexandre Belloni

Publication date: 30 May 2013

Published in: Bernoulli

Full work available at URL: https://arxiv.org/abs/1001.0188
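The publication studies ordinary least squares applied after Lasso-based model selection (the post-Lasso estimator): the Lasso is used only to select covariates, and the retained covariates are then refit by least squares, which removes the Lasso's shrinkage bias on the selected support. Below is a minimal illustrative sketch of that two-step idea on simulated data; the design, the penalty level alpha=0.1, and the scikit-learn estimators are assumptions made for illustration, not the authors' implementation.

```python
# Sketch of "least squares after model selection" (post-Lasso).
# Simulated data and the ad hoc penalty level are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
n, p, s = 100, 500, 5                  # n observations, p covariates, s nonzero coefficients
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:s] = 1.0                         # sparse true coefficient vector
y = X @ beta + rng.standard_normal(n)

# Step 1: Lasso for model selection (penalty chosen ad hoc here).
lasso = Lasso(alpha=0.1).fit(X, y)
selected = np.flatnonzero(lasso.coef_)

# Step 2: OLS refit on the selected support (the post-Lasso estimator).
ols = LinearRegression().fit(X[:, selected], y)

post_lasso = np.zeros(p)
post_lasso[selected] = ols.coef_
print("selected support:", selected)
print("post-Lasso coefficients on support:", np.round(ols.coef_, 2))
```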




Related Items (only showing first 100 items)

An alternative to synthetic control for models with many covariates under sparsity
On cross-validated Lasso in high dimensions
Statistical inference in sparse high-dimensional additive models
Lasso-driven inference in time and space
Automatic Component Selection in Additive Modeling of French National Electricity Load Forecasting
Lower and upper bound estimates of inequality of opportunity for emerging economies
The effect of regularization in portfolio selection problems
Variable selection and prediction with incomplete high-dimensional data
Recent advances in statistical methodologies in evaluating program for high-dimensional data
De-biasing the Lasso with degrees-of-freedom adjustment
On estimation of the diagonal elements of a sparse precision matrix
Does data splitting improve prediction?
Model selection criteria for a linear model to solve discrete ill-posed problems on the basis of singular decomposition and random projection
Random weighting in LASSO regression
Sparse linear models and \(l_1\)-regularized 2SLS with high-dimensional endogenous regressors and instruments
Complete subset regressions with large-dimensional sets of predictors
Inference for biased transformation models
Unnamed Item
Rates of convergence of the adaptive elastic net and the post-selection procedure in ultra-high dimensional sparse models
Fast rates of minimum error entropy with heavy-tailed noise
A penalized approach to covariate selection through quantile regression coefficient models
Statistical inference for model parameters in stochastic gradient descent
Transaction cost analytics for corporate bonds
Asymptotic properties of Lasso+mLS and Lasso+Ridge in sparse high-dimensional linear regression
SONIC: social network analysis with influencers and communities
An integrated precision matrix estimation for multivariate regression problems
A Critical Review of LASSO and Its Derivatives for Variable Selection Under Dependence Among Covariates
Inference on heterogeneous treatment effects in high-dimensional dynamic panels under weak dependence
Inference for High-Dimensional Exchangeable Arrays
Inference for high-dimensional instrumental variables regression
Shrinkage estimation of dynamic panel data models with interactive fixed effects
BAYESIAN DYNAMIC VARIABLE SELECTION IN HIGH DIMENSIONS
Time-varying forecast combination for high-dimensional data
Sequential change point detection for high-dimensional data using nonconvex penalized quantile regression
Testing stochastic dominance with many conditioning variables
Directed graphs and variable selection in large vector autoregressive models
UNIFORM-IN-SUBMODEL BOUNDS FOR LINEAR REGRESSION IN A MODEL-FREE FRAMEWORK
A latent class Cox model for heterogeneous time-to-event data
Nonconvex penalized reduced rank regression and its oracle properties in high dimensions
Estimation and inference of treatment effects with \(L_2\)-boosting in high-dimensional settings
Finite mixture regression: a sparse variable selection by model selection for clustering
Inference for low-rank models
A dynamic screening algorithm for hierarchical binary marketing data
LASSO for Stochastic Frontier Models with Many Efficient Firms
When are Google Data Useful to Nowcast GDP? An Approach via Preselection and Shrinkage
Recovery of partly sparse and dense signals
Least squares after model selection in high-dimensional sparse models
Multiple structural breaks in cointegrating regressions: a model selection approach
On the sparsity of Lasso minimizers in sparse data recovery
Gaussian approximations and multiplier bootstrap for maxima of sums of high-dimensional random vectors
Communication-efficient estimation of high-dimensional quantile regression
Prediction with a flexible finite mixture-of-regressions
Inference robust to outliers with 1-norm penalization
Generalized M-estimators for high-dimensional Tobit I models
Debiased Inference on Treatment Effect in a High-Dimensional Model
Robust inference on average treatment effects with possibly more covariates than observations
Unnamed Item
Unnamed Item
Optimal bounds for aggregation of affine estimators
Additive model selection
ROCKET: robust confidence intervals via Kendall's tau for transelliptical graphical models
Uniformly valid post-regularization confidence regions for many functional parameters in z-estimation framework
Debiasing the Lasso: optimal sample size for Gaussian designs
I-LAMM for sparse learning: simultaneous control of algorithmic complexity and statistical error
Beyond support in two-stage variable selection
Pivotal estimation via square-root lasso in nonparametric regression
Time-dependent Poisson reduced rank models for political text data analysis
On asymptotically optimal confidence regions and tests for high-dimensional models
Parametric and semiparametric reduced-rank regression with flexible sparsity
Double Machine Learning for Partially Linear Mixed-Effects Models with Repeated Measurements
Regularizing Double Machine Learning in Partially Linear Endogenous Models
Block-based refitting in \(\ell_{12}\) sparse regularization
Sorted concave penalized regression
Endogeneity in high dimensions
An Automated Approach Towards Sparse Single-Equation Cointegration Modelling
Necessary and sufficient conditions for variable selection consistency of the Lasso in high dimensions
A Projection Based Conditional Dependence Measure with Applications to High-dimensional Undirected Graphical Models
High-dimensional variable selection via low-dimensional adaptive learning
Variable selection with spatially autoregressive errors: a generalized moments Lasso estimator
Regularization methods for high-dimensional sparse control function models
Double machine learning with gradient boosting and its application to the Big \(N\) audit quality effect
Control variate selection for Monte Carlo integration
Comparing six shrinkage estimators with large sample theory and asymptotically optimal prediction intervals
Robust measurement via a fused latent and graphical item response theory model
Inference for high-dimensional varying-coefficient quantile regression
In defense of the indefensible: a very naïve approach to high-dimensional inference
On Lasso refitting strategies
On the Use of the Lasso for Instrumental Variables Estimation with Some Invalid Instruments
Network differential connectivity analysis
Projected spline estimation of the nonparametric function in high-dimensional partially linear models for massive data
Unnamed Item
A comment on Hansen's risk of James-Stein and Lasso shrinkage
Information criteria bias correction for group selection
Unnamed Item
AUTOMATED ESTIMATION OF VECTOR ERROR CORRECTION MODELS
Using Machine Learning Methods to Support Causal Inference in Econometrics
The Risk of James–Stein and Lasso Shrinkage
Lassoing the Determinants of Retirement
Optimal model averaging for divergent-dimensional Poisson regressions
CLEAR: Covariant LEAst-Square Refitting with Applications to Image Restoration



Cites Work

