Optimal rates for regularization of statistical inverse learning problems
From MaRDI portal
reproducing kernel Hilbert space; inverse problem; statistical learning; minimax convergence rates; spectral regularization
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Computational learning theory (68Q32)
- Numerical solution to inverse problems in abstract spaces (65J22)
- Linear operators in reproducing-kernel Hilbert spaces (including de Branges, de Branges-Rovnyak, and other structured spaces) (47B32)
Abstract: We consider a statistical inverse learning problem, where we observe the image of a function f through a linear operator A at i.i.d. random design points X_i, superposed with additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of Af) and the inverse (estimation of f) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number of observations n grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in n but also in the explicit dependence of the constant factor on the variance of the noise and the radius of the source condition set.
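As a concrete illustration (not taken from the paper itself), the spectral regularization family discussed in the abstract can be sketched for the direct problem with the Tikhonov filter g_λ(σ) = 1/(σ + λ), applied to the eigenvalues of the empirical kernel operator. The Gaussian kernel, the bandwidth `gamma`, and the regularization parameter `lam` below are illustrative assumptions; other filters (spectral cut-off, Landweber iteration) fit the same template.

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=1.0):
    # Gram matrix of the Gaussian (RBF) kernel on 1-D inputs
    d2 = (X[:, None] - Y[None, :]) ** 2
    return np.exp(-gamma * d2)

def spectral_regularized_fit(X, y, lam=1e-2, gamma=1.0):
    """Spectral regularization with the Tikhonov filter g_lam(s) = 1/(s + lam),
    applied to the eigenvalues of the empirical kernel operator K/n.
    Equivalent to kernel ridge regression: alpha = (K + n*lam*I)^{-1} y."""
    n = len(X)
    K = gaussian_kernel(X, X, gamma)
    s, U = np.linalg.eigh(K / n)               # eigendecomposition of K/n
    filtered = U @ np.diag(1.0 / (s + lam)) @ U.T  # g_lam applied spectrally
    alpha = filtered @ y / n
    def predict(Xnew):
        return gaussian_kernel(Xnew, X, gamma) @ alpha
    return predict
```

Swapping `1.0 / (s + lam)` for another filter function yields the other members of the class analyzed in the paper, with rates governed by the filter's qualification and the source condition.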
Recommendations
- Inverse statistical learning
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Optimal rates of convergence for nonparametric statistical inverse problems
- A unified approach to inversion problems in statistics
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
Cites work
- scientific article; zbMATH DE number 3824308 (no title available)
- scientific article; zbMATH DE number 3907465 (no title available)
- scientific article; zbMATH DE number 45848 (no title available)
- scientific article; zbMATH DE number 3605460 (no title available)
- scientific article; zbMATH DE number 4000257 (no title available)
- scientific article; zbMATH DE number 936298 (no title available)
- scientific article; zbMATH DE number 967931 (no title available)
- A distribution-free theory of nonparametric regression
- Approximation in learning theory
- Approximation methods for supervised learning
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Boosting with the L2 loss
- Convergence Characteristics of Methods of Regularization Estimators for Nonlinear Operator Equations
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Convergence rates of kernel conjugate gradient for random design regression
- Cross-validation based adaptation for regularization operators in learning theory
- Discretization error analysis for Tikhonov regularization
- Fréchet derivatives of the power function
- Geometry of linear ill-posed problems in variable Hilbert scales
- Introduction to nonparametric estimation
- Inverse statistical learning
- Learning from examples as an inverse problem
- Learning theory estimates via integral operators and their approximations
- Minimax fast rates for discriminant analysis with errors in variables
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Optimal rates for the regularized least-squares algorithm
- Regularization in kernel learning
- Shannon sampling. II: Connections to learning theory
- Spectral Algorithms for Supervised Learning
- Statistical consistency of kernel canonical correlation analysis
- Support Vector Machines
Cited in (52)
- Adaptive parameter selection for kernel ridge regression
- Optimal indirect estimation for linear inverse problems with discretely sampled functional data
- Spectral algorithms for functional linear regression
- How many neurons do we need? A refined analysis for shallow networks trained with gradient descent
- Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
- Radial basis function regularization for linear inverse problems with random noise
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Distributed minimum error entropy algorithms
- Optimal rates of convergence for nonparametric statistical inverse problems
- Sobolev norm learning rates for regularized least-squares algorithms
- Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2–8, 2022
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- Least squares approximations in linear statistical inverse learning problems
- Nyström subsampling method for coefficient-based regularized regression
- Convergence of regularization methods with filter functions for a regularization parameter chosen with GSURE and mildly ill-posed inverse problems
- Optimality of robust online learning
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Online regularized pairwise learning with least squares loss
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Lower bounds for invariant statistical models with applications to principal component analysis
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- A note on the prediction error of principal component regression in high dimensions
- Kernel conjugate gradient methods with random projections
- scientific article; zbMATH DE number 6671876 (no title available)
- Distributed spectral pairwise ranking algorithms
- scientific article; zbMATH DE number 7370593 (no title available)
- scientific article; zbMATH DE number 7415114 (no title available)
- Inverse learning in Hilbert scales
- Shearlet-based regularization in statistical inverse learning with an application to x-ray tomography
- Convex regularization in statistical inverse learning problems
- Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
- Convergence Rates for Learning Linear Operators from Noisy Data
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- From inexact optimization to learning via gradient concentration
- Optimal rate of the regularized regression learning algorithm
- The empirical process of residuals from an inverse regression
- An elementary analysis of ridge regression with random design
- Learning particle swarming models from data with Gaussian processes
- Bayesian frequentist bounds for machine learning and system identification
- Two-Layer Neural Networks with Values in a Banach Space
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Sketching with Spherical Designs for Noisy Data Fitting on Spheres
- scientific article; zbMATH DE number 7306853 (no title available)
- Optimality of regularized least squares ranking with imperfect kernels
- Iterative kernel regression with preconditioning
MaRDI item Q667648