Spectral Algorithms for Supervised Learning
From MaRDI portal
Publication: 3510946
DOI: 10.1162/neco.2008.05-07-517
zbMath: 1147.68643
OpenAlex: W2109668081
Wikidata: Q51894056
Scholia: Q51894056
MaRDI QID: Q3510946
Alessandro Verri, L. Lo Gerfo, Francesca Odone, Lorenzo Rosasco, Ernesto De Vito
Publication date: 3 July 2008
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/neco.2008.05-07-517
Related Items (35)
- Coefficient regularized regression with non-iid sampling
- Multi-penalty regularization in learning theory
- Distributed spectral pairwise ranking algorithms
- Optimal learning rates for kernel partial least squares
- Unnamed Item
- Unnamed Item
- Regularized least square regression with unbounded and dependent sampling
- On spectral windows in supervised learning from data
- Convergence of regularization methods with filter functions for a regularization parameter chosen with GSURE and mildly ill-posed inverse problems
- Learning theory of distributed spectral algorithms
- Least square regression with indefinite kernels and coefficient regularization
- Spectral algorithms for learning with dependent observations
- Convex regularization in statistical inverse learning problems
- An empirical feature-based learning algorithm producing sparse approximations
- Efficient kernel canonical correlation analysis using Nyström approximation
- A neural network algorithm to pattern recognition in inverse problems
- Consistency analysis of spectral regularization algorithms
- Multi-output learning via spectral filtering
- Convergence rate of kernel canonical correlation analysis
- Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
- Optimal rates for regularization of statistical inverse learning problems
- Distributed kernel-based gradient descent algorithms
- A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression
- Learning sets with separating kernels
- Unnamed Item
- Balancing principle in supervised learning for a general regularization scheme
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- On a regularization of unsupervised domain adaptation in RKHS
- Unnamed Item
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Analysis of singular value thresholding algorithm for matrix completion
- Unnamed Item
- Unnamed Item
- Thresholded spectral algorithms for sparse approximations
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning rates of least-square regularized regression
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Discretization error analysis for Tikhonov regularization
- Boosting with the L2 loss
- 10.1162/153244302760200704
- Shannon sampling and function reconstruction from point values
- Stability results in learning theory
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels
This page was built for publication: Spectral Algorithms for Supervised Learning