On regression adjustments in experiments with several treatments

From MaRDI portal

DOI: 10.1214/07-AOAS143
zbMath: 1144.62027
arXiv: 0803.3757
OpenAlex: W2047354426
MaRDI QID: Q2482974

David Freedman

Publication date: 30 April 2008

Published in: The Annals of Applied Statistics

Full work available at URL: https://arxiv.org/abs/0803.3757

Related Items

- Inference after covariate-adaptive randomisation: aspects of methodology and theory
- Rejoinder on 'Inference after covariate-adaptive randomization: aspects of methodology and theory'
- A causal bootstrap
- Covariate adjustment in randomization-based causal inference for \(2^K\) factorial designs
- Statistical inference of heterogeneous treatment effect based on single-index model
- Sampling-based Randomised Designs for Causal Inference under the Potential Outcomes Framework
- The Generalized Oaxaca-Blinder Estimator
- Randomization-based test for censored outcomes: a new look at the logrank test
- Toward Better Practice of Covariate Adjustment in Analyzing Randomized Clinical Trials
- Randomization does not justify logistic regression
- On equivalencies between design-based and regression-based variance estimators for randomized experiments
- Regression-adjusted estimation of quantile treatment effects under covariate-adaptive randomizations
- Lasso adjustments of treatment effect estimates in randomized experiments
- Leveraging population outcomes to improve the generalization of experimental results: application to the JTPA study
- A unified analysis of regression adjustment in randomized experiments
- Agnostic notes on regression adjustments to experimental data: reexamining Freedman's critique
- Regression adjustment for treatment effect with multicollinearity in high dimensions
- Sharp bounds on the variance in randomized experiments
- Randomization-based causal inference from split-plot designs
- Using Standard Tools From Finite Population Sampling to Improve Causal Inference for Complex Experiments
- Revisiting regression adjustment in experiments with heterogeneous treatment effects
- The Perils of Balance Testing in Experimental Design: Messy Analyses of Clean Data



Cites Work