Distributed statistical optimization for non-randomly stored big data with application to penalized learning
Publication: 6172933
DOI: 10.1007/s11222-023-10247-x
zbMath: 1516.62030
MaRDI QID: Q6172933
Publication date: 20 July 2023
Published in: Statistics and Computing
Mathematics Subject Classification:
- Computational methods for problems pertaining to statistics (62-08)
- Linear regression; mixed models (62J05)
- Statistical aspects of big data and data science (62R07)
Cites Work
- Nearly unbiased variable selection under minimax concave penalty
- A partially linear framework for massive heterogeneous data
- Aggregated estimating equation estimation
- One-step sparse estimates in nonconcave penalized likelihood models
- Distributed testing and estimation under sparse high dimensional models
- A distributed one-step estimator
- Nonconcave penalized likelihood with a diverging number of parameters
- Robust distributed modal regression for massive data
- Distributed estimation of principal eigenspaces
- Quantile regression under memory constraint
- Quantile regression in big data: a divide and conquer based strategy
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- A split-and-conquer approach for analysis of extraordinarily large data
- Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties
- Communication-Efficient Distributed Statistical Inference
- Model Selection and Estimation in Regression with Grouped Variables
- Communication-Efficient Accurate Statistical Estimation