Learning theory of distributed spectral algorithms
DOI: 10.1088/1361-6420/aa72b2 | zbMath: 1372.65162 | OpenAlex: W2613940844 | MaRDI QID: Q5348011
Ding-Xuan Zhou, Zheng-Chu Guo, Shao-Bo Lin
Publication date: 11 August 2017
Published in: Inverse Problems
Full work available at URL: https://doi.org/10.1088/1361-6420/aa72b2
Keywords: inverse operator; error bound; reproducing kernel Hilbert space; spectral algorithm; distributed learning; learning rate; divide-and-conquer approach
MSC classifications:
- Linear regression; mixed models (62J05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Integral operators (47G10)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Numerical solution to inverse problems in abstract spaces (65J22)
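The record itself does not reproduce the algorithm, but the "divide-and-conquer approach" keyword refers to averaging local estimators trained on disjoint data subsets, as in distributed kernel ridge regression (one instance of a spectral algorithm). A minimal sketch under that reading, with a Gaussian kernel; all function names, the data, and the parameter values (`lam`, `gamma`, `m`) are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def krr_fit(X, y, lam, gamma):
    """Kernel ridge regression with a Gaussian kernel; returns a predictor."""
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq)
    n = len(y)
    # Regularized linear system (K + lam * n * I) alpha = y
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    def predict(Xt):
        sq_t = np.sum((Xt[:, None, :] - X[None, :, :]) ** 2, axis=-1)
        return np.exp(-gamma * sq_t) @ alpha
    return predict

def distributed_krr(X, y, m, lam, gamma):
    """Divide-and-conquer: fit KRR on m disjoint subsets, average the predictors."""
    parts = np.array_split(np.arange(len(y)), m)
    local = [krr_fit(X[idx], y[idx], lam, gamma) for idx in parts]
    return lambda Xt: np.mean([f(Xt) for f in local], axis=0)

# Toy regression problem: noisy samples of sin(pi * x) on [-1, 1]
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)

f_bar = distributed_krr(X, y, m=4, lam=1e-3, gamma=5.0)
Xt = np.linspace(-1, 1, 50)[:, None]
err = np.mean((f_bar(Xt) - np.sin(np.pi * Xt[:, 0])) ** 2)
```

Each local machine solves only an (n/m)-by-(n/m) linear system, and the final predictor is the plain average of the local ones; the paper's analysis concerns when this averaging preserves the minimax learning rate of the single-machine estimator.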
Cites Work
- Random design analysis of ridge regression
- On regularization algorithms in learning theory
- The covering number in learning theory
- Optimal rates for the regularized least-squares algorithm
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Learning Theory
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Shannon sampling and function reconstruction from point values
- Regularization schemes for minimum error entropy principle