Experimental design issues in big data: the question of bias
From MaRDI portal
Publication:3296456
Abstract: Data can be collected in scientific studies via a controlled experiment or passive observation. Big data is often collected passively, e.g. from social media. In studies of causation, great efforts are made to guard against bias, hidden confounders, and feedback, which can destroy the identification of causation by corrupting or omitting counterfactuals (controls). Various solutions to these problems are discussed, including randomization.
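The abstract's point about randomization can be illustrated with a small simulation (a hypothetical sketch, not taken from the paper: the confounder `u`, the effect sizes, and the selection rule are all invented for illustration). A hidden confounder that drives both treatment uptake and the outcome biases the naive comparison in passively collected data, while random assignment breaks the link between the confounder and treatment:

```python
import random

random.seed(0)

def outcome(treated, u):
    # True treatment effect is 2.0; the hidden confounder u also raises the outcome.
    return 2.0 * treated + 3.0 * u + random.gauss(0, 1)

n = 100_000

# Passive (observational) collection: a hidden confounder u drives both
# who gets treated and the outcome, biasing the naive comparison.
obs_treated, obs_control = [], []
for _ in range(n):
    u = random.random()
    treated = random.random() < u          # high-u units self-select into treatment
    (obs_treated if treated else obs_control).append(outcome(treated, u))

# Randomized experiment: treatment assigned by a fair coin, independent of u.
rct_treated, rct_control = [], []
for _ in range(n):
    u = random.random()
    treated = random.random() < 0.5
    (rct_treated if treated else rct_control).append(outcome(treated, u))

def mean(xs):
    return sum(xs) / len(xs)

naive_obs = mean(obs_treated) - mean(obs_control)   # inflated by confounding
naive_rct = mean(rct_treated) - mean(rct_control)   # close to the true effect 2.0
print(f"observational estimate: {naive_obs:.2f}")
print(f"randomized estimate:    {naive_rct:.2f}")
```

Under this setup the observational difference in means overstates the true effect of 2.0 (here by roughly 1.0, since treated units have systematically higher `u`), while the randomized estimate recovers it.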
Cites work
- A Basis for the Selection of a Response Surface Design
- A minimax approach to randomization and estimation in survey sampling
- A minimax approach to sample surveys
- Bayesian inference for causal effects: The role of randomization
- Causality. Models, reasoning, and inference
- Generic identifiability of linear structural equation models by ancestor decomposition
- I-robust and D-robust designs on a finite design space
- Information-Based Optimal Subdata Selection for Big Data Linear Regression
- Learning functions and approximate Bayesian computation design: ABCD
- Maximum Entropy Sampling and Optimal Bayesian Experimental Design
- Minimum bias designs with constraints
- Principles of experimental design for big data analysis
- The central role of the propensity score in observational studies for causal effects
Cited in: 2 documents