Distributed learning with indefinite kernels
DOI: 10.1142/S021953051850032X · zbMath: 1440.68238 · OpenAlex: W2911111822 · Wikidata: Q128630379 · MaRDI QID: Q5236752
No author found.
Publication date: 10 October 2019
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s021953051850032x
Keywords: indefinite kernels; distributed learning; coefficient-based regularized regression; minimax optimal rates
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05); Distributed algorithms (68W15)
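The entry carries no abstract, but the keywords point to divide-and-conquer learning with coefficient-based regularization, where the kernel need not be symmetric or positive semi-definite. The following Python sketch is only an illustration of that general setup, not the paper's algorithm; the sine kernel, the \(\ell^2\) coefficient penalty, and all parameter values are assumptions chosen for the demo.

import numpy as np

def indefinite_kernel(s, t, w=2 * np.pi):
    # K(s, t) = sin(w (s - t)) is neither symmetric nor positive
    # semi-definite; coefficient-based regularization needs neither
    return np.sin(w * (s[:, None] - t[None, :]))

def local_estimator(x, y, lam):
    # l2 coefficient regularization on one data block:
    # minimize (1/n) ||K a - y||^2 + lam ||a||^2 over a in R^n
    n = len(x)
    K = indefinite_kernel(x, x)
    return np.linalg.solve(K.T @ K / n + lam * np.eye(n), K.T @ y / n)

def distributed_predict(x, y, x_new, m=4, lam=1e-3):
    # divide-and-conquer: fit on m disjoint blocks, average the predictors
    preds = []
    for xb, yb in zip(np.array_split(x, m), np.array_split(y, m)):
        a = local_estimator(xb, yb, lam)
        preds.append(indefinite_kernel(x_new, xb) @ a)
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 400)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(400)
x_new = np.linspace(0.0, 1.0, 50)
f_bar = distributed_predict(x, y, x_new)
print("RMSE vs. truth:", np.sqrt(np.mean((f_bar - np.sin(2 * np.pi * x_new)) ** 2)))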
Related Items
- Online gradient descent algorithms for functional data learning
- Unnamed Item
- Estimates on learning rates for multi-penalty distribution regression
- Coefficient-based regularized distribution regression
- Modeling interactive components by coordinate kernel polynomial models
- Semi-supervised learning with summary statistics
- Optimal learning with Gaussians and correntropy loss
- Unnamed Item
Cites Work
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Unnamed Item
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Introduction to the peptide binding problem of computational immunology: new results
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Least square regression with indefinite kernels and coefficient regularization
- On regularization algorithms in learning theory
- Reproducing kernel Banach spaces with the \(\ell^1\) norm
- Regularization networks with indefinite kernels
- Learning theory estimates for coefficient-based regularized regression
- An extension of Mercer theorem to matrix-valued measurable kernels
- Optimal rates for coefficient-based regularized regression
- Optimal rates for the regularized least-squares algorithm
- Learning with sample dependent hypothesis spaces
- On some extensions of Bernstein's inequality for self-adjoint operators
- Distributed learning with multi-penalty regularization
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Learning Theory
- Support Vector Machines
- Mathematical Statistics
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Learning Rates of \(\ell^q\) Coefficient Regularization Learning with Gaussian Kernel
- Thresholded spectral algorithms for sparse approximations
- Learning theory of distributed spectral algorithms
- Learning rates for regularized least squares ranking algorithm
- Indefinite Proximity Learning: A Review
- Kernelized Elastic Net Regularization: Generalization Bounds, and Sparse Recovery
- An Introduction to Matrix Concentration Inequalities
- Theory of Reproducing Kernels
- Scattered Data Approximation