Statistical inference in massive datasets by empirical likelihood
Publication: 2155010
DOI: 10.1007/S00180-021-01153-9
zbMATH Open: 1505.62280
arXiv: 2004.08580
OpenAlex: W3216536477
MaRDI QID: Q2155010
FDO: Q2155010
Shaochen Wang, Xue-Jun Ma, Wang Zhou
Publication date: 15 July 2022
Published in: Computational Statistics
Abstract: In this paper, we propose a new statistical inference method for massive data sets that is simple and efficient, combining the divide-and-conquer approach with empirical likelihood. Compared with two popular methods, the bag of little bootstraps and the subsampled double bootstrap, our method makes full use of the data and reduces the computational burden. Extensive numerical studies and a real data analysis demonstrate the effectiveness and flexibility of the proposed method. Furthermore, its asymptotic properties are derived.
Full work available at URL: https://arxiv.org/abs/2004.08580
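The abstract describes the method only at a high level. As a rough illustration of the divide-and-conquer empirical-likelihood idea, the sketch below splits a large sample into blocks, computes a per-block estimate of the mean, and applies Owen's empirical likelihood to the block estimates, calibrated by the usual chi-square limit from Wilks' theorem. The scalar-mean setting, the block count, and the helper names neg2_log_el and dc_el_test are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el(x, mu):
    """-2 log empirical likelihood ratio for the mean of x, evaluated at mu."""
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return np.inf  # mu outside the convex hull of the sample: EL ratio is 0
    # The Lagrange multiplier solves sum_i z_i / (1 + lam * z_i) = 0 on the
    # interval where every weight 1 + lam * z_i stays positive; the left-hand
    # side is strictly decreasing there, so the root is unique.
    eps = 1e-10
    lo = -1.0 / z.max() + eps
    hi = -1.0 / z.min() - eps
    lam = brentq(lambda l: np.sum(z / (1.0 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

def dc_el_test(data, mu0, n_blocks=20):
    """Divide-and-conquer EL sketch: estimate on each block, then run EL on
    the block estimates and calibrate with the chi-square(1) limit."""
    blocks = np.array_split(np.asarray(data, dtype=float), n_blocks)
    theta = np.array([b.mean() for b in blocks])  # one estimate per block
    stat = neg2_log_el(theta, mu0)
    return stat, chi2.sf(stat, df=1)

# Hypothetical usage on simulated "massive" data with true mean 2.0.
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1_000_000)
stat, pval = dc_el_test(x, mu0=2.0)
print(f"-2 log EL ratio = {stat:.3f}, p-value = {pval:.3f}")
```

In this sketch, empirical likelihood is applied to a handful of block estimates rather than to the full sample, which is what avoids the repeated resampling cost paid by the bag of little bootstraps and the subsampled double bootstrap.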
Mathematics Subject Classification:
- Computational methods for problems pertaining to statistics (62-08)
- Nonparametric estimation (62G05)
- Statistical aspects of big data and data science (62R07)
Cites Work
- Empirical likelihood ratio confidence regions
- Empirical likelihood
- Empirical likelihood ratio confidence intervals for a single functional
- A split-and-conquer approach for analysis of extraordinarily large data
- A general Bahadur representation of \(M\)-estimators and its application to linear regression with nonstochastic designs
- A Scalable Bootstrap for Massive Data
- Optimal Subsampling for Large Sample Logistic Regression
- Learning Optimal Personalized Treatment Rules in Consideration of Benefit and Risk: With an Application to Treating Type 2 Diabetes Patients With Insulin Therapies
Cited In (1)