scientific article; zbMATH DE number 7415114
From MaRDI portal
Publication:5159455
Publication date: 27 October 2021
Full work available at URL: https://arxiv.org/abs/2006.16744
Title: zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Related Items (2)
- Error analysis of classification learning algorithms based on LUMs loss
- Optimality of regularized least squares ranking with imperfect kernels
Cites Work
- Unnamed Item
- Unnamed Item
- Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers
- Optimal rates for regularization of statistical inverse learning problems
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- On regularization algorithms in learning theory
- A note on application of integral operator in learning theory
- Distributed kernel-based gradient descent algorithms
- Regularization networks and support vector machines
- Distributed kernel gradient descent algorithm for minimum error entropy principle
- Optimal rates for the regularized least-squares algorithm
- Application of integral operator for regularized least-square regression
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Learning Theory for Distribution Regression
- 10.1162/153244302760200704
- Leave-One-Out Bounds for Kernel Methods
- WONDER: Weighted one-shot distributed ridge regression in high dimensions
- Distributed learning with indefinite kernels
- Learning theory of distributed spectral algorithms
- Theory of Reproducing Kernels
- Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory