Optimal rates for regularization of statistical inverse learning problems
Keywords: reproducing kernel Hilbert space; inverse problem; statistical learning; minimax convergence rates; spectral regularization
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Computational learning theory (68Q32)
- Numerical solution to inverse problems in abstract spaces (65J22)
- Linear operators in reproducing-kernel Hilbert spaces (including de Branges, de Branges-Rovnyak, and other structured spaces) (47B32)
Abstract: We consider a statistical inverse learning problem, where we observe the image of a function $f$ through a linear operator $A$ at i.i.d. random design points $X_i$, superposed with an additive noise. The distribution of the design points is unknown and can be very general. We analyze simultaneously the direct (estimation of $Af$) and the inverse (estimation of $f$) learning problems. In this general framework, we obtain strong and weak minimax optimal rates of convergence (as the number $n$ of observations grows large) for a large class of spectral regularization methods over regularity classes defined through appropriate source conditions. This improves on or completes previous results obtained in related settings. The optimality of the obtained rates is shown not only in the exponent in $n$ but also in the explicit dependency of the constant factor on the variance of the noise and the radius of the source condition set.
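For orientation, a standard instance of the spectral regularization framework described in the abstract is Tikhonov regularization; the sketch below uses generic notation (not taken from the paper itself) to show how a filter function $g_\lambda$ applied to an empirical operator defines an estimator, and how regularity is encoded by a Hölder-type source condition.

```latex
% Illustrative notation: T_n is an empirical (covariance-type) operator
% built from the design points, and z_n collects the noisy observations.
% A spectral regularization method applies a filter g_\lambda to T_n:
\[
  \hat{f}_\lambda \;=\; g_\lambda(T_n)\, z_n,
  \qquad
  g_\lambda(t) \;=\; \frac{1}{t+\lambda}
  \quad \text{(Tikhonov filter).}
\]
% A H\"older-type source condition of order r and radius R restricts
% the regularity class over which minimax rates are stated:
\[
  f^{\dagger} \;=\; T^{\,r} h, \qquad \|h\| \le R .
\]
```

Other filters (spectral cut-off, Landweber/gradient iterations) fit the same template with a different choice of $g_\lambda$.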
Recommendations
- Inverse statistical learning
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Optimal rates of convergence for nonparametric statistical inverse problems
- A unified approach to inversion problems in statistics
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
Cites work
- scientific article; zbMATH DE number 3824308
- scientific article; zbMATH DE number 3907465
- scientific article; zbMATH DE number 45848
- scientific article; zbMATH DE number 3605460
- scientific article; zbMATH DE number 4000257
- scientific article; zbMATH DE number 936298
- scientific article; zbMATH DE number 967931
- A distribution-free theory of nonparametric regression
- Approximation in learning theory
- Approximation methods for supervised learning
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Boosting With theL2Loss
- Convergence Characteristics of Methods of Regularization Estimators for Nonlinear Operator Equations
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Convergence rates of kernel conjugate gradient for random design regression
- Cross-validation based adaptation for regularization operators in learning theory
- DISCRETIZATION ERROR ANALYSIS FOR TIKHONOV REGULARIZATION
- Fréchet derivatives of the power function
- Geometry of linear ill-posed problems in variable Hilbert scales
- Introduction to nonparametric estimation
- Inverse statistical learning
- Learning from examples as an inverse problem
- Learning theory estimates via integral operators and their approximations
- Minimax fast rates for discriminant analysis with errors in variables
- On early stopping in gradient descent learning
- On regularization algorithms in learning theory
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Optimal rates for the regularized least-squares algorithm
- Regularization in kernel learning
- Shannon sampling. II: Connections to learning theory
- Spectral Algorithms for Supervised Learning
- Statistical consistency of kernel canonical correlation analysis
- Support Vector Machines
Cited in (52 documents)
- Radial basis function regularization for linear inverse problems with random noise
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Bayesian frequentist bounds for machine learning and system identification
- Optimal rates of convergence for nonparametric statistical inverse problems
- Learning particle swarming models from data with Gaussian processes
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Two-Layer Neural Networks with Values in a Banach Space
- Nyström subsampling method for coefficient-based regularized regression
- scientific article; zbMATH DE number 7370593
- scientific article; zbMATH DE number 7415114
- A note on the prediction error of principal component regression in high dimensions
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- scientific article; zbMATH DE number 6671876
- Shearlet-based regularization in statistical inverse learning with an application to x-ray tomography
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Iterative kernel regression with preconditioning
- Spectral algorithms for functional linear regression
- How many neurons do we need? A refined analysis for shallow networks trained with gradient descent
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
- Optimal learning rates for least squares regularized regression with unbounded sampling
- scientific article; zbMATH DE number 7306853
- Least squares approximations in linear statistical inverse learning problems
- An elementary analysis of ridge regression with random design
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- Distributed minimum error entropy algorithms
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Optimal rate of the regularized regression learning algorithm
- Lower bounds for invariant statistical models with applications to principal component analysis
- Sketching with Spherical Designs for Noisy Data Fitting on Spheres
- From inexact optimization to learning via gradient concentration
- Online regularized pairwise learning with least squares loss
- Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
- The empirical process of residuals from an inverse regression
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2--8, 2022
- Optimality of robust online learning
- Convergence of regularization methods with filter functions for a regularization parameter chosen with GSURE and mildly ill-posed inverse problems
- Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
- Adaptive parameter selection for kernel ridge regression
- Sobolev norm learning rates for regularized least-squares algorithms
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- Convergence Rates for Learning Linear Operators from Noisy Data
- Inverse learning in Hilbert scales
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Optimal indirect estimation for linear inverse problems with discretely sampled functional data
- Distributed spectral pairwise ranking algorithms
- Convex regularization in statistical inverse learning problems
- Kernel conjugate gradient methods with random projections
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Optimality of regularized least squares ranking with imperfect kernels
MaRDI item: Q667648