Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
From MaRDI portal
Publication: 4998979
Recommendations
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- On early stopping in gradient descent learning
- A discrepancy-based parameter adaptation and stopping rule for minimization algorithms aiming at Tikhonov-type regularization
- Optimal learning rates for kernel partial least squares
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
Cites work
- scientific article; zbMATH DE number 2107836
- scientific article; zbMATH DE number 936298
- scientific article; zbMATH DE number 3298300
- A Technique for the Numerical Solution of Certain Integral Equations of the First Kind
- A distribution-free theory of nonparametric regression
- A linear functional strategy for regularized ranking
- A nearest neighbor estimate of the residual variance
- Adaptive kernel methods using the balancing principle
- An introduction to matrix concentration inequalities
- Boosting With the L2 Loss
- Boosting methods for regression
- Boosting with early stopping: convergence and consistency
- Convergence rates of kernel conjugate gradient for random design regression
- Cross-validation based adaptation for regularization operators in learning theory
- Deep learning
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Early stopping for statistical inverse problems via truncated SVD estimation
- High-dimensional probability. An introduction with applications in data science
- High-probability bounds for the reconstruction error of PCA
- Introduction to nonparametric estimation
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- Learning from examples as an inverse problem
- Learning theory estimates via integral operators and their approximations
- Local Rademacher complexities
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Mathematical foundations of infinite-dimensional statistical models
- Neural Network Learning
- Non-asymptotic adaptive prediction in functional linear models
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- On some extensions of Bernstein's inequality for self-adjoint operators
- On the mathematical foundations of learning
- Optimal adaptation for early stopping in statistical inverse problems
- Optimal rates for regularization of statistical inverse learning problems
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Optimal rates for the regularized least-squares algorithm
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- Potential Functions in Mathematical Pattern Recognition
- Regularization theory for ill-posed problems. Selected topics
- Residual variance estimation using a nearest neighbor statistic
- Shannon sampling. II: Connections to learning theory
- Smoothed residual stopping for statistical inverse problems via truncated SVD estimation
- Sobolev norm learning rates for regularized least-squares algorithms
- Support Vector Machines
- The discretized discrepancy principle under general source conditions
- Theory of Reproducing Kernels
- Variance estimation for high-dimensional regression models
- Variance function estimation in multivariate nonparametric regression with fixed design
- Weak convergence and empirical processes. With applications to statistics
Cited in (5)
- A note on the prediction error of principal component regression in high dimensions
- Towards adaptivity via a new discrepancy principle for Poisson inverse problems
- From inexact optimization to learning via gradient concentration
- Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
- Adaptive parameter selection for kernel ridge regression
This page was built for publication: Analyzing the discrepancy principle for kernelized spectral filter learning algorithms