Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
DOI: 10.1214/17-EJS1258 · zbMATH Open: 1362.62087 · arXiv: 1605.08839 · OpenAlex: W2597846060 · MaRDI QID: Q521337 · FDO: Q521337
Authors: Lee H. Dicker, Dean P. Foster, Daniel Hsu
Publication date: 7 April 2017
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/1605.08839
Recommendations
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- A risk comparison of ordinary least squares vs ridge regression
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Kernel ridge regression
- Model selection in kernel ridge regression
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Factor analysis and principal components; correspondence analysis (62H25)
- Ridge regression; shrinkage estimators (Lasso) (62J07)
Cited In (27)
- On principal components regression, random projections, and column subsampling
- Kernel ridge regression
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- A risk comparison of ordinary least squares vs ridge regression
- Spectrally-truncated kernel ridge regression and its free lunch
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Parallelizing spectrally regularized kernel algorithms
- Weighted spectral filters for kernel interpolation on spheres: estimates of prediction accuracy for noisy data
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Thresholded spectral algorithms for sparse approximations
- Sparse principal component regression via singular value decomposition approach
- Spectral algorithms for learning with dependent observations
- Functional principal subspace sampling for large scale functional data analysis
- Distributed kernel-based gradient descent algorithms
- Optimal rates for multi-pass stochastic gradient methods
- Kernel partial least squares for stationary data
- Distributed kernel ridge regression with communications
- On the predictive potential of kernel principal components
- Sobolev norm learning rates for regularized least-squares algorithms
- Nonasymptotic analysis of robust regression with modified Huber's loss
- A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression
- Randomized estimation of functional covariance operator via subsampling
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Kernel conjugate gradient methods with random projections
- Some equivalence relationships of regularized regressions