Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
DOI: 10.1016/j.acha.2018.09.009
zbMATH: 1436.62146
arXiv: 1801.06720
OpenAlex: W2895466035
MaRDI QID: Q2300763
Lorenzo Rosasco, Alessandro Rudi, Volkan Cevher, Jun Hong Lin
Publication date: 28 February 2020
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://arxiv.org/abs/1801.06720
Keywords: learning theory; ridge regression; regression; reproducing kernel Hilbert space; principal component regression; regularization scheme; sampling operator
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Factor analysis and principal components; correspondence analysis (62H25)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
- Inference from stochastic processes and spectral analysis (62M15)
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Cites Work
- Random design analysis of ridge regression
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- On regularization algorithms in learning theory
- Double operator integrals in a Hilbert space
- Optimal rates for the regularized least-squares algorithm
- Learning rates of least-square regularized regression
- Boosting with early stopping: convergence and consistency
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Regularization of some linear ill-posed problems with discretized random noisy data
- Learning Theory
- Support Vector Machines
- Spectral Algorithms for Supervised Learning
- Remarks on Inequalities for Large Deviation Probabilities
- Norm Inequalities Equivalent to Heinz Inequality
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Moduli of continuity for operator valued functions
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Learning theory of distributed spectral algorithms
- Learning Bounds for Kernel Regression Using Effective Data Dimensionality