Scientific article; zbMATH DE number 7306894
zbMATH: 1475.68268; arXiv: 1702.07254; MaRDI QID: Q5148996
Publication date: 5 February 2021
Full work available at URL: https://arxiv.org/abs/1702.07254
Title: Sobolev norm learning rates for regularized least-squares algorithms
Keywords: uniform convergence; statistical learning theory; least-squares regression; learning rates; interpolation norms; regularized kernel methods
MSC classifications: Nonparametric regression and quantile regression (62G08); Learning and adaptive systems in artificial intelligence (68T05)
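As a point of reference for the topic named in the keywords, the following is a minimal sketch of the regularized kernel least-squares estimator (kernel ridge regression) whose learning rates works of this kind study. It is not taken from the article; the Gaussian kernel, parameter values, and toy data are illustrative assumptions.

```python
# Regularized least-squares over an RKHS (kernel ridge regression):
# given samples (x_i, y_i), the estimator solves
#   f_lambda = argmin_{f in H} (1/n) * sum_i (f(x_i) - y_i)^2 + lambda * ||f||_H^2,
# with closed-form dual solution alpha = (K + n * lambda * I)^{-1} y and
# prediction f_lambda(x) = sum_i alpha_i k(x_i, x).
import numpy as np

def gaussian_kernel(X, Z, gamma=1.0):
    """Gram matrix k(x, z) = exp(-gamma * ||x - z||^2) between rows of X and Z."""
    sq_dists = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def krr_fit(X, y, lam=1e-2, gamma=1.0):
    """Return the dual coefficients alpha of the regularized LS estimator."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def krr_predict(X_train, alpha, X_test, gamma=1.0):
    """Evaluate f_lambda at the test points."""
    return gaussian_kernel(X_test, X_train, gamma) @ alpha

# Toy usage: noisy samples of a smooth target. Learning-rate analyses bound
# how fast the error of f_lambda decays as the sample size n grows.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
alpha = krr_fit(X, y, lam=1e-3, gamma=10.0)
X_test = np.linspace(-1, 1, 5)[:, None]
print(krr_predict(X, alpha, X_test, gamma=10.0))
```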
Related Items
- Efficient kernel-based variable selection with sparsistency
- Ivanov-regularised least-squares estimators over large RKHSs and their interpolation spaces
- Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime
- On the K-functional in learning theory
- Structure learning via unstructured kernel-based M-estimation
- Sketching with spherical designs for noisy data fitting on spheres
- Recovery of a time-dependent bottom topography function from the shallow water equations via an adjoint approach
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Cites Work
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Optimal rates for regularization of statistical inverse learning problems
- Learning rates for kernel-based expectile regression
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- On regularization algorithms in learning theory
- Probability theory. Translated from the German by Robert B. Burckel
- A distribution-free theory of nonparametric regression
- Optimal regression rates for SVMs using Gaussian kernels
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- On some extensions of Bernstein's inequality for self-adjoint operators
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Discretization error analysis for Tikhonov regularization
- Support Vector Machines
- Remarks on Inequalities for Large Deviation Probabilities
- Shannon sampling and function reconstruction from point values
- An Introduction to Matrix Concentration Inequalities