Two optimal strategies for active learning of causal models from interventional data
From MaRDI portal
Publication:2440180
Abstract: From observational data alone, a causal DAG is identifiable only up to Markov equivalence. Interventional data generally improves identifiability; however, the gain of an intervention depends strongly on the intervention target, that is, the set of intervened variables. We present active learning (that is, optimal experimental design) strategies that compute optimal interventions for two different learning goals. The first is a greedy approach using single-vertex interventions that maximizes the number of edges that can be oriented after each intervention. The second yields, in polynomial time, a minimum set of targets of arbitrary size that guarantees full identifiability. This second approach proves a conjecture of Eberhardt (2008) on the number of unbounded intervention targets that is sufficient and, in the worst case, necessary for full identifiability. In a simulation study, we compare our two active learning approaches to random interventions and an existing approach, and analyze the influence of estimation errors on the overall performance of active learning.
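The greedy strategy in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the function name and the edge-set representation are illustrative, and the Meek-rule propagation that the full method uses to count all newly orientable edges is omitted here, so intervening on a vertex is credited only with the undirected edges incident to it.

```python
def greedy_single_vertex_intervention(undirected_edges, vertices):
    """Pick the single-vertex intervention that orients the most edges.

    `undirected_edges` holds the unoriented edges of the current
    essential graph as vertex pairs. Intervening on a vertex v orients
    every undirected edge incident to v; without Meek-rule propagation
    (omitted in this sketch) the count is a lower bound on the true gain.
    """
    def gain(v):
        return sum(1 for a, b in undirected_edges if v == a or v == b)
    # Sort for a deterministic tie-break before taking the maximizer.
    return max(sorted(vertices), key=gain)

# Example: a star with center 2 -- intervening on 2 orients all three edges.
best = greedy_single_vertex_intervention({(1, 2), (2, 3), (2, 4)}, {1, 2, 3, 4})
```

In the star example the center vertex is incident to every undirected edge, so the greedy rule selects it; the paper's algorithm repeats such a selection after each intervention until no further edges can be oriented.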
Recommendations
- Active learning of causal networks with intervention experiments and optimal designs
- Improved baselines for causal structure learning on interventional data
- Active learning of continuous-time Bayesian networks through interventions
- Toward optimal probabilistic active learning using a Bayesian approach
- scientific article; zbMATH DE number 1149423
- Learning Causal Bayesian Networks from Incomplete Observational Data and Interventions
Cites work
- scientific article; zbMATH DE number 3891425
- 10.1162/153244303321897717
- A characterization of Markov equivalence classes for acyclic digraphs
- Active learning
- Active learning of causal networks with intervention experiments and optimal designs
- Algorithmic Aspects of Vertex Elimination on Graphs
- Causal diagrams for empirical research
- Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs
- Estimating high-dimensional directed acyclic graphs with the PC-algorithm
- Learning Causal Bayesian Networks from Observations and Experiments: A Decision Theoretic Approach
- Nonparametric Estimation from Incomplete Observations
- On separating systems of graphs
- Triangulated graphs and the elimination process
Cited in (10)
- Sound and complete causal identification with latent variables given local background knowledge
- Active learning of causal networks with intervention experiments and optimal designs
- Combinatorial and algebraic perspectives on the marginal independence structure of Bayesian networks
- Bayesian sample size determination for causal discovery
- Characterization and greedy learning of interventional Markov equivalence classes of directed acyclic graphs
- scientific article; zbMATH DE number 7387623
- Fast causal orientation learning in directed acyclic graphs
- Marginal integration for nonparametric causal inference
- Causal structure learning: a combinatorial perspective
- Experiment selection for causal discovery