Balancing principle in supervised learning for a general regularization scheme
DOI: 10.1016/j.acha.2018.03.001 · OpenAlex: W2792024940 · Wikidata: Q130092327 · Scholia: Q130092327 · MaRDI QID: Q2278452 · FDO: Q2278452
Authors: Shuai Lu, Peter Mathé, Sergei V. Pereverzyev
Publication date: 5 December 2019
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2018.03.001
Recommendations
- Adaptive kernel methods using the balancing principle
- About the balancing principle for choice of the regularization parameter
- Cross-validation based adaptation for regularization operators in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
Mathematics Subject Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Computational learning theory (68Q32)
- Algorithms for approximation of functions (65D15)
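For orientation, the following is a minimal sketch of a Lepskii-type balancing principle for choosing the regularization parameter, in the spirit of the paper's topic. Kernel ridge regression, the geometric grid, the stylized noise bound `noise(lam)`, and the constant `kappa` are illustrative assumptions, not the authors' scheme.

```python
# Sketch of a Lepskii-type balancing principle for picking the
# regularization parameter in kernel ridge regression.
# Assumed/illustrative: the noise-bound model sigma(lam) and kappa=4.
import numpy as np

def krr_fit(K, y, lam):
    """Kernel ridge regression coefficients for kernel matrix K."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)

def balancing_principle(K, y, lambdas, sigma, kappa=4.0):
    """Return the largest lambda whose estimator stays within
    kappa * sigma(mu) of every estimator built with a smaller mu.

    lambdas : decreasing grid of candidate regularization parameters
    sigma   : callable giving the (assumed known) noise bound sigma(lam)
    """
    # Fitted values on the sample for each candidate lambda.
    fits = [K @ krr_fit(K, y, lam) for lam in lambdas]
    chosen = lambdas[-1]  # fallback: smallest lambda is always admissible
    for j, lam in enumerate(lambdas):
        # Balancing condition against all smaller parameters in the grid.
        ok = all(
            np.sqrt(np.mean((fits[j] - fits[i]) ** 2))
            <= kappa * sigma(lambdas[i])
            for i in range(j + 1, len(lambdas))
        )
        if ok:
            chosen = lam
            break  # grid is decreasing, so the first admissible lam is largest
    return chosen

# Toy usage with a Gaussian kernel on synthetic data.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * x[:, 0]) + 0.1 * rng.standard_normal(60)
K = np.exp(-((x - x.T) ** 2) / 0.5)
grid = np.geomspace(1.0, 1e-6, num=25)       # decreasing lambda grid
noise = lambda lam: 0.1 / np.sqrt(60 * lam)  # stylized variance bound
print(balancing_principle(K, y, grid, noise))
```

The rule accepts the largest lambda whose estimator agrees, up to `kappa` times the noise bound, with every estimator built from a smaller lambda; this balances the unknown approximation error against the known stochastic error without ever estimating the former.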
Cites Work
- Regularization networks and support vector machines
- On early stopping in gradient descent learning
- Optimal rates for the regularized least-squares algorithm
- Oracle-type posterior contraction rates in Bayesian inverse problems
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Regularization of some linear ill-posed problems with discretized random noisy data
- Regularization theory for ill-posed problems. Selected topics
- Geometry of linear ill-posed problems in variable Hilbert scales
- Approximation methods for supervised learning
- Learning the kernel function via regularization
- Learning theory estimates via integral operators and their approximations
- Adaptive kernel methods using the balancing principle
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Discretization strategy for linear ill-posed problems in variable Hilbert scales
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Radial kernels and their reproducing kernel Hilbert spaces
- Learning theory of distributed spectral algorithms
Cited In (18)
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Spectral algorithms for functional linear regression
- Least squares approximations in linear statistical inverse learning problems
- Error guarantees for least squares approximation with noisy samples in domain adaptation
- A machine learning approach to optimal Tikhonov regularization. I: Affine manifolds
- Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
- A statistical learning assessment of Huber regression
- Optimality of robust online learning
- Adaptive parameter selection for kernel ridge regression
- On a regularization of unsupervised domain adaptation in RKHS
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Inverse learning in Hilbert scales
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Distributed spectral pairwise ranking algorithms
- Convex regularization in statistical inverse learning problems
- Analysis of regularized least squares for functional linear regression model