Standardization and control for confounding in observational studies: a historical perspective

From MaRDI portal
Publication:252802

DOI: 10.1214/13-STS453 · zbMATH Open: 1331.62287 · arXiv: 1503.02853 · MaRDI QID: Q252802

David Clayton, Niels Keiding

Publication date: 4 March 2016

Published in: Statistical Science

Abstract: Control for confounders in observational studies was generally handled through stratification and standardization until the 1960s. Standardization typically reweights the stratum-specific rates so that exposure categories become comparable. With the development first of log-linear models, and soon also of nonlinear regression techniques (logistic regression, failure-time regression) that the emerging computers could handle, regression modelling became the preferred approach, just as multiple regression analysis already was for continuous outcomes. Since the mid-1990s it has become increasingly obvious that weighting methods are still often useful, and sometimes even necessary. Against this background, we aim to describe the emergence of the modelling approach and the refinement of the weighting approach for confounder control.
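The abstract's description of standardization — reweighting stratum-specific rates to a common standard so exposure groups become comparable — can be illustrated with a minimal sketch of direct standardization. The data below are hypothetical and not from the paper; the combined person-years are used as the standard population, which is one common choice.

```python
# Direct standardization: weighted average of stratum-specific rates,
# with weights taken from a common standard population.
# All numbers are made-up illustrative values.

# Per age stratum: (events, person-years) in each exposure group.
exposed   = {"young": (10, 1000), "old": (90, 500)}
unexposed = {"young": (30, 4000), "old": (20, 200)}

# Standard population weights: here, combined person-years per stratum.
standard = {s: exposed[s][1] + unexposed[s][1] for s in exposed}
total = sum(standard.values())

def directly_standardized_rate(group):
    """Reweight each stratum-specific rate by the standard population's
    share of that stratum, then sum."""
    return sum((events / pyears) * standard[s] / total
               for s, (events, pyears) in group.items())

rate_exposed = directly_standardized_rate(exposed)      # ≈ 0.0309
rate_unexposed = directly_standardized_rate(unexposed)  # ≈ 0.0189
```

Because both standardized rates use the same stratum weights, their ratio is free of confounding by the stratifying variable (here, age), even though the crude rates would be distorted by the groups' very different age distributions.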


Full work available at URL: https://arxiv.org/abs/1503.02853





Cited In (5)



