Balancing principle in supervised learning for a general regularization scheme
Recommendations
- Adaptive kernel methods using the balancing principle
- About the balancing principle for choice of the regularization parameter
- Cross-validation based adaptation for regularization operators in learning theory
- Model selection for regularized least-squares algorithm in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
Cites work
- Adaptive kernel methods using the balancing principle
- Approximation methods for supervised learning
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Cross-validation based adaptation for regularization operators in learning theory
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Discretization strategy for linear ill-posed problems in variable Hilbert scales
- Geometry of linear ill-posed problems in variable Hilbert scales
- Learning the kernel function via regularization
- Learning theory estimates via integral operators and their approximations
- Learning theory of distributed spectral algorithms
- Model selection for regularized least-squares algorithm in learning theory
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- Optimal rates for the regularized least-squares algorithm
- Oracle-type posterior contraction rates in Bayesian inverse problems
- Radial kernels and their reproducing kernel Hilbert spaces
- Regularization networks and support vector machines
- Regularization of some linear ill-posed problems with discretized random noisy data
- Regularization theory for ill-posed problems. Selected topics
- Spectral Algorithms for Supervised Learning
Cited in (18)
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Spectral algorithms for functional linear regression
- Least squares approximations in linear statistical inverse learning problems
- Error guarantees for least squares approximation with noisy samples in domain adaptation
- A machine learning approach to optimal Tikhonov regularization. I: Affine manifolds
- Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
- A statistical learning assessment of Huber regression
- Optimality of robust online learning
- Adaptive parameter selection for kernel ridge regression
- On a regularization of unsupervised domain adaptation in RKHS
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Inverse learning in Hilbert scales
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Analysis of regularized least squares for functional linear regression model
- Distributed spectral pairwise ranking algorithms
- Convex regularization in statistical inverse learning problems