Balancing principle in supervised learning for a general regularization scheme
DOI: 10.1016/j.acha.2018.03.001 · OpenAlex: W2792024940 · MaRDI QID: Q2278452
Shuai Lu, Peter Mathé, Sergei V. Pereverzyev
Publication date: 5 December 2019
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2018.03.001
Mathematics Subject Classification: Computational learning theory (68Q32) · Learning and adaptive systems in artificial intelligence (68T05) · Algorithms for approximation of functions (65D15)
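The balancing principle named in the title admits a compact algorithmic description. The sketch below is a minimal illustration of a Lepskii-type balancing rule of the kind studied in this line of work, not code from the paper: it assumes estimators have already been computed on an increasing grid of regularization parameters, together with known upper bounds on their stochastic error (decreasing in the parameter), and all names (`balance`, `lambdas`, `noise_bounds`) are hypothetical. The constant 4 is the factor commonly used in this literature.

```python
import numpy as np

def balance(lambdas, estimators, noise_bounds, factor=4.0):
    """Pick the largest lambda whose estimator stays within the noise
    level of every estimator computed with a smaller parameter."""
    chosen = 0
    for j in range(len(lambdas)):
        if all(np.linalg.norm(estimators[j] - estimators[i])
               <= factor * noise_bounds[i] for i in range(j)):
            chosen = j
        else:
            break
    return lambdas[chosen], estimators[chosen]

# Illustrative use on synthetic data: Tikhonov (ridge) estimators for a
# noisy linear system, with assumed stochastic-error bounds ~ 1/sqrt(lambda).
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
y = A @ rng.standard_normal(10) + 0.1 * rng.standard_normal(50)

grid = np.logspace(-4, 1, 20)  # increasing parameter grid
ests = [np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y) for lam in grid]
bounds = 0.1 / np.sqrt(grid)   # illustrative noise bounds, decreasing in lambda
lam_star, x_star = balance(grid, ests, bounds)
```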
Related Items
- Distributed spectral pairwise ranking algorithms
- A machine learning approach to optimal Tikhonov regularization I: Affine manifolds
- Error guarantees for least squares approximation with noisy samples in domain adaptation
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- Convex regularization in statistical inverse learning problems
- Inverse learning in Hilbert scales
- Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Analysis of regularized least squares for functional linear regression model
- A statistical learning assessment of Huber regression
- On a regularization of unsupervised domain adaptation in RKHS
Cites Work
- Oracle-type posterior contraction rates in Bayesian inverse problems
- Regularization theory for ill-posed problems. Selected topics
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Radial kernels and their reproducing kernel Hilbert spaces
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Adaptive kernel methods using the balancing principle
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Approximation methods for supervised learning
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Regularization of some linear ill-posed problems with discretized random noisy data
- Spectral Algorithms for Supervised Learning
- CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
- Geometry of linear ill-posed problems in variable Hilbert scales
- Discretization strategy for linear ill-posed problems in variable Hilbert scales
- Learning theory of distributed spectral algorithms