Post-selection inference via algorithmic stability
Publication: 6183754
Abstract: When the target of statistical inference is chosen in a data-driven manner, the guarantees of classical theory no longer apply. We propose a solution to the problem of inference after selection by building on the framework of algorithmic stability, in particular the branch with origins in differential privacy. Stability is achieved by randomizing the selection step, and it serves as a quantitative measure sufficient to obtain non-trivial post-selection corrections for classical confidence intervals. Importantly, the underpinnings of algorithmic stability translate directly into computational efficiency: our method computes simple corrections for selective inference without recourse to Markov chain Monte Carlo sampling.
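The abstract's recipe (randomize the selection step, then pay for the selection with a widened classical interval) can be sketched in a few lines. The following Python sketch is an illustration under assumptions, not the paper's actual procedure: it selects the coordinate with the largest noise-perturbed sample mean and then widens the usual normal-theory confidence interval by an additive slack tied to the noise scale. The helper names and the specific form of the slack are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def randomized_select(X, noise_scale, rng):
    """Return the index of the column with the largest *noisy* mean.

    Gaussian noise on the selection statistic is what makes selection
    stable: perturbing one observation only slightly shifts the
    distribution of the chosen index.
    """
    noisy_means = X.mean(axis=0) + rng.normal(scale=noise_scale, size=X.shape[1])
    return int(np.argmax(noisy_means))

def corrected_interval(X, j, noise_scale, alpha=0.05):
    """Normal-theory CI for the mean of column j, plus a stability slack.

    The additive slack (here simply the selection noise scale) is a
    hypothetical stand-in for the paper's stability-based correction.
    """
    n = X.shape[0]
    mean = X[:, j].mean()
    se = X[:, j].std(ddof=1) / np.sqrt(n)
    half = norm.ppf(1 - alpha / 2) * se + noise_scale
    return mean - half, mean + half

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))  # ten candidate means, all truly zero
j = randomized_select(X, noise_scale=0.1, rng=rng)
lo, hi = corrected_interval(X, j, noise_scale=0.1)
print(f"selected coordinate {j}, corrected 95% CI: ({lo:.3f}, {hi:.3f})")
```

The point of the sketch is the shape of the computation the abstract describes: one noisy argmax and one closed-form interval widening, with no Markov chain Monte Carlo sampling.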
Cites work
- scientific article; zbMATH DE number 1380425
- scientific article; zbMATH DE number 845714
- doi:10.1162/153244302760200704
- doi:10.1162/153244303322753616
- A significance test for the lasso
- Algorithmic stability for adaptive data analysis
- Bootstrapping and sample splitting for high-dimensional, assumption-lean inference
- Bounds on the sample complexity for private learning and private data release
- Conditional predictive inference for stable algorithms
- Exact post-selection inference, with application to the Lasso
- Inferactive data analysis
- Models as approximations. I. Consequences illustrated with linear regression
- On the length of post-model-selection confidence intervals conditional on polyhedral constraints
- Predictive inference with the jackknife+
- Preserving statistical validity in adaptive data analysis (extended abstract)
- Selective inference with a randomized response
- Simultaneous and selective inference: Current successes and future challenges
- Splitting strategies for post-selection inference
- Statistical learning and selective inference
- Sure independence screening for ultrahigh dimensional feature space. With discussion and authors' reply
- The algorithmic foundations of differential privacy
- The reusable holdout: preserving validity in adaptive data analysis
- Theory of Cryptography
- Uniformly valid confidence intervals post-model-selection
- Valid post-selection inference
- Valid post-selection inference in model-free linear regression
- What can we learn privately?
Cited in (3)