Scientific article; zbMATH DE number 6860823
Publication: 4637042
zbMath: 1435.68260 · arXiv: 1708.01960 · MaRDI QID: Q4637042
Qiang Wu, Zheng-Chu Guo, Lei Shi
Publication date: 17 April 2018
Full work available at URL: https://arxiv.org/abs/1708.01960
Title: Learning theory of distributed regression with bias corrected regularization kernel network
Classification (MSC):
- Nonparametric regression and quantile regression (62G08)
- Learning and adaptive systems in artificial intelligence (68T05)
- Distributed algorithms (68W15)
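The title and MSC codes place this work in distributed kernel-based regression. As a minimal illustrative sketch only, the following Python code implements the plain divide-and-conquer kernel ridge regression baseline that this line of work builds on (compare the cited "Divide and Conquer Kernel Ridge Regression" below): each machine solves a local regularized least-squares problem and the global estimator averages the local predictors. It does not reproduce the paper's bias-corrected estimator, and all function names, kernel choices, and parameters here are assumptions made for illustration.

```python
import numpy as np

def gaussian_kernel(X, Z, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def local_krr(X, y, lam=1e-2, sigma=1.0):
    """Kernel ridge regression on one data block.

    Solves (K + lam * n * I) alpha = y and returns the local predictor
    f(x) = sum_i alpha_i k(x_i, x).
    """
    n = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return lambda Xnew: gaussian_kernel(Xnew, X, sigma) @ alpha

def distributed_krr(X, y, num_blocks=4, lam=1e-2, sigma=1.0):
    """Divide-and-conquer KRR: fit each block locally, average the predictors."""
    blocks = np.array_split(np.arange(X.shape[0]), num_blocks)
    predictors = [local_krr(X[b], y[b], lam, sigma) for b in blocks]
    return lambda Xnew: np.mean([f(Xnew) for f in predictors], axis=0)

# Toy usage on a noisy sine curve (hypothetical data, not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(400)
f_bar = distributed_krr(X, y, num_blocks=4, lam=1e-3, sigma=0.5)
Xtest = np.linspace(-3, 3, 5).reshape(-1, 1)
print(f_bar(Xtest))  # averaged predictions at 5 test points
```

The averaging step is what makes the scheme communication-efficient: each block transmits only its fitted predictor, never its raw data, at the cost of the bias issues that the bias-corrected estimator studied in this paper is designed to mitigate.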
Related Items (18)
WONDER: Weighted one-shot distributed ridge regression in high dimensions ⋮ Distributed spectral pairwise ranking algorithms ⋮ Distributed regression learning with coefficient regularization ⋮ Distributed learning with partial coefficients regularization ⋮ Distributed kernel gradient descent algorithm for minimum error entropy principle ⋮ Averaging versus voting: a comparative study of strategies for distributed classification ⋮ Optimality of regularized least squares ranking with imperfect kernels ⋮ Bias corrected regularization kernel method in ranking ⋮ Robust kernel-based distribution regression ⋮ Optimal learning rates for distribution regression ⋮ Distributed regularized least squares with flexible Gaussian kernels ⋮ Convergence analysis of distributed multi-penalty regularized pairwise learning ⋮ Distributed Generalized Cross-Validation for Divide-and-Conquer Kernel Ridge Regression and Its Asymptotic Optimality ⋮ Semi-supervised learning with summary statistics ⋮ Distributed learning with indefinite kernels ⋮ Debiased magnitude-preserving ranking: learning rate and bias characterization ⋮ Optimal learning with Gaussians and correntropy loss ⋮ Distributed least squares prediction for functional linear regression
Cites Work
- Statistical significance in high-dimensional linear models
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- A note on application of integral operator in learning theory
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Approximation methods for supervised learning
- Approximation in learning theory
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- A new concentration result for regularized risk minimizers
- On the optimality of averaging in distributed statistical learning
- doi:10.1162/153244302760200704
- Leave-One-Out Bounds for Kernel Methods
- Thresholded spectral algorithms for sparse approximations
- Learning theory of distributed spectral algorithms
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Theory of Reproducing Kernels