Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
From MaRDI portal
Publication:4998979
Authors: Alain Celisse, Martin Wahl
Publication date: 9 July 2021
Full work available at URL: https://arxiv.org/abs/2004.08436
Recommendations
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- On early stopping in gradient descent learning
- A discrepancy-based parameter adaptation and stopping rule for minimization algorithms aiming at Tikhonov-type regularization
- Optimal learning rates for kernel partial least squares
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
Keywords: reproducing kernel Hilbert space; oracle inequality; effective dimension; discrepancy principle; early stopping; non-parametric regression; spectral regularization
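As a rough illustration of the discrepancy principle named in the keywords, the following is a minimal sketch (not taken from the paper): gradient descent for kernel regression on synthetic data is stopped at the first iteration where the residual norm ||Y - f_t|| falls below a noise-calibrated threshold. The Gaussian kernel, the bandwidth, and the threshold choice kappa = sigma are all illustrative assumptions, and the noise level is assumed known.

```python
import numpy as np

# Illustrative sketch only (not the authors' code): early stopping of kernel
# gradient descent via the discrepancy principle on synthetic 1-D data.
rng = np.random.default_rng(0)
n, sigma = 200, 0.3                      # sample size; noise level assumed known
X = np.sort(rng.uniform(-1.0, 1.0, n))
Y = np.sin(np.pi * X) + sigma * rng.normal(size=n)

# Gaussian kernel matrix (bandwidth 0.2 is an arbitrary illustrative choice).
K = np.exp(-((X[:, None] - X[None, :]) ** 2) / (2 * 0.2**2))

eta = 1.0 / np.linalg.eigvalsh(K).max()  # step size <= 1/||K|| keeps the iteration stable
c = np.zeros(n)                          # f_t(.) = sum_i c_i K(x_i, .), so f_t(X) = K @ c
tau = sigma * np.sqrt(n)                 # threshold: RMS residual <= sigma (kappa = sigma)

for t in range(1, 20_000):
    residual = Y - K @ c
    # Discrepancy principle: stop the first time ||Y - f_t|| drops below tau.
    if np.linalg.norm(residual) <= tau:
        print(f"discrepancy principle stops at iteration t = {t}")
        break
    c += eta * residual                  # gradient step in the RKHS (Landweber iteration)
else:
    print("threshold not reached within the iteration budget")
```

Stopping this iteration at the data-dependent time above, rather than at a fixed iteration count, is the adaptivity question the paper analyzes for general spectral filter algorithms.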
Cites Work
- Weak convergence and empirical processes. With applications to statistics
- Theory of Reproducing Kernels
- Mathematical foundations of infinite-dimensional statistical models
- Support Vector Machines
- High-dimensional probability. An introduction with applications in data science
- Boosting with early stopping: convergence and consistency
- On early stopping in gradient descent learning
- On the mathematical foundations of learning
- Deep learning
- A Technique for the Numerical Solution of Certain Integral Equations of the First Kind
- Boosting with the L2 loss
- Oracle inequalities in empirical risk minimization and sparse recovery problems. École d'Été de Probabilités de Saint-Flour XXXVIII-2008.
- A distribution-free theory of nonparametric regression
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Local Rademacher complexities
- An introduction to matrix concentration inequalities
- Optimal rates for the regularized least-squares algorithm
- Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration
- Regularization theory for ill-posed problems. Selected topics
- Introduction to nonparametric estimation
- Non-asymptotic adaptive prediction in functional linear models
- Neural Network Learning
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Adaptive kernel methods using the balancing principle
- Early stopping and non-parametric regression: an optimal data-dependent stopping rule
- Learning from examples as an inverse problem
- Cross-validation based adaptation for regularization operators in learning theory
- On regularization algorithms in learning theory
- Variance function estimation in multivariate nonparametric regression with fixed design
- Residual variance estimation using a nearest neighbor statistic
- Boosting methods for regression
- Kernel ridge vs. principal component regression: minimax bounds and the qualification of regularization operators
- The discretized discrepancy principle under general source conditions
- Sobolev norm learning rates for regularized least-squares algorithms
- Optimal rates for regularization of statistical inverse learning problems
- Convergence rates of kernel conjugate gradient for random design regression
- On some extensions of Bernstein's inequality for self-adjoint operators
- Variance estimation for high-dimensional regression models
- A linear functional strategy for regularized ranking
- Optimal adaptation for early stopping in statistical inverse problems
- High-probability bounds for the reconstruction error of PCA
- Potential Functions in Mathematical Pattern Recognition
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Early stopping for statistical inverse problems via truncated SVD estimation
- Smoothed residual stopping for statistical inverse problems via truncated SVD estimation
- A nearest neighbor estimate of the residual variance
- Early Stopping for Kernel Boosting Algorithms: A General Analysis With Localized Complexities
Cited In (5)
- Analyzing the discrepancy principle for kernelized spectral filter learning algorithms
- A note on the prediction error of principal component regression in high dimensions
- Towards adaptivity via a new discrepancy principle for Poisson inverse problems
- From inexact optimization to learning via gradient concentration
- Adaptive parameter selection for kernel ridge regression