Statistical inference in massive datasets by empirical likelihood
Abstract: In this paper, we propose a new statistical inference method for massive data sets that is simple and efficient, combining the divide-and-conquer approach with empirical likelihood. Compared with two popular methods, the bag of little bootstraps and the subsampled double bootstrap, our method makes full use of the data and reduces the computational burden. Extensive numerical studies and real data analysis demonstrate the effectiveness and flexibility of the proposed method. Furthermore, its asymptotic properties are derived.
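The abstract describes the recipe only at a high level; below is a minimal sketch (an assumption about the general approach, not the authors' implementation) of a divide-and-conquer empirical likelihood interval for a mean: split the sample into blocks, compute one cheap estimate per block, and apply Owen's empirical likelihood to the block estimates. The block count, grid search, solver details, and all function names (`el_log_ratio`, `dc_el_confidence_interval`) are illustrative choices.

```python
# Minimal sketch (assumptions, not the authors' code): divide-and-conquer
# empirical likelihood for the mean of a large sample.  Each block contributes
# one estimate; Owen's empirical likelihood is applied to the K block estimates.
import numpy as np
from scipy.stats import chi2


def el_log_ratio(z, mu, tol=1e-10, max_iter=50):
    """Return -2 * log empirical likelihood ratio for the mean of z at value mu."""
    d = z - mu
    lam = 0.0
    for _ in range(max_iter):          # Newton iterations for the Lagrange multiplier
        denom = 1.0 + lam * d
        if np.any(denom <= 0):         # keep the implied weights positive
            lam *= 0.5
            continue
        g = np.sum(d / denom)          # derivative of the dual objective
        h = -np.sum(d ** 2 / denom ** 2)
        step = g / h
        lam -= step
        if abs(step) < tol:
            break
    denom = 1.0 + lam * d
    if np.any(denom <= 0):             # mu effectively outside the EL support
        return np.inf
    return 2.0 * np.sum(np.log1p(lam * d))


def dc_el_confidence_interval(x, n_blocks=20, level=0.95, grid=400):
    """Divide-and-conquer EL confidence interval for the mean of x."""
    blocks = np.array_split(np.asarray(x), n_blocks)
    theta = np.array([b.mean() for b in blocks])    # one cheap estimate per block
    crit = chi2.ppf(level, df=1)                    # Wilks-type calibration
    mus = np.linspace(theta.min(), theta.max(), grid)
    keep = [m for m in mus if el_log_ratio(theta, m) <= crit]
    return min(keep), max(keep)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.exponential(scale=2.0, size=1_000_000)  # stand-in for a "massive" sample
    print(dc_el_confidence_interval(x))
```

Treating the block estimates as approximately independent is what lets the per-block work run in parallel, while the final empirical likelihood step stays cheap because it only sees the K block-level values.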
Recommendations
- Distributed statistical inference for massive data
- A partitioned quasi-likelihood for distributed statistical inference
- Gap bootstrap methods for massive data sets with an application to transportation engineering
- Parallel inference for big data with the group Bayesian method
- High-dimensional empirical likelihood inference
Cites work
- A Scalable Bootstrap for Massive Data
- A general Bahadur representation of \(M\)-estimators and its application to linear regression with nonstochastic designs
- A split-and-conquer approach for analysis of extraordinarily large data
- A statistical perspective on algorithmic leveraging
- Communication-efficient algorithms for statistical optimization
- Empirical likelihood
- Empirical likelihood ratio confidence intervals for a single functional
- Empirical likelihood ratio confidence regions
- Learning optimal personalized treatment rules in consideration of benefit and risk: with an application to treating type 2 diabetes patients with insulin therapies
- Optimal subsampling for large sample logistic regression
Cited in (2)