Optimal rates for regularization of statistical inverse learning problems
DOI: 10.1007/S10208-017-9359-7 · zbMATH Open: 1412.62042 · arXiv: 1604.04054 · OpenAlex: W2963053844 · MaRDI QID: Q667648
Authors: Gilles Blanchard, Nicole Mücke
Publication date: 1 March 2019
Published in: Foundations of Computational Mathematics
Full work available at URL: https://arxiv.org/abs/1604.04054
Recommendations
- Inverse statistical learning
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Optimal rates of convergence for nonparametric statistical inverse problems
- A unified approach to inversion problems in statistics
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
Keywords: reproducing kernel Hilbert space, inverse problem, statistical learning, minimax convergence rates, spectral regularization
Mathematics Subject Classification:
- Nonparametric regression and quantile regression (62G08)
- Asymptotic properties of nonparametric inference (62G20)
- Computational learning theory (68Q32)
- Numerical solution to inverse problems in abstract spaces (65J22)
- Linear operators in reproducing-kernel Hilbert spaces (including de Branges, de Branges-Rovnyak, and other structured spaces) (47B32)
Cites Work
- Support Vector Machines
- Title not available
- Title not available
- Introduction to nonparametric estimation
- On early stopping in gradient descent learning
- Statistical consistency of kernel canonical correlation analysis
- Boosting with the L2 loss
- A distribution-free theory of nonparametric regression
- Title not available
- Optimal rates for the regularized least-squares algorithm
- Title not available
- Geometry of linear ill-posed problems in variable Hilbert scales
- Title not available
- Approximation methods for supervised learning
- Title not available
- Shannon sampling. II: Connections to learning theory
- Minimax fast rates for discriminant analysis with errors in variables
- Learning theory estimates via integral operators and their approximations
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Learning from examples as an inverse problem
- Spectral Algorithms for Supervised Learning
- Cross-validation based adaptation for regularization operators in learning theory
- Title not available
- On regularization algorithms in learning theory
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Approximation in learning theory
- Discretization error analysis for Tikhonov regularization
- Inverse statistical learning
- Regularization in kernel learning
- Fréchet derivatives of the power function
- Convergence rates of kernel conjugate gradient for random design regression
- Convergence Characteristics of Methods of Regularization Estimators for Nonlinear Operator Equations
Cited In (52)
- Adaptive parameter selection for kernel ridge regression
- Radial basis function regularization for linear inverse problems with random noise
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Bayesian frequentist bounds for machine learning and system identification
- Learning particle swarming models from data with Gaussian processes
- Optimal rates of convergence for nonparametric statistical inverse problems
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Two-Layer Neural Networks with Values in a Banach Space
- Nyström subsampling method for coefficient-based regularized regression
- A note on the prediction error of principal component regression in high dimensions
- Title not available
- Title not available
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case
- Title not available
- Shearlet-based regularization in statistical inverse learning with an application to x-ray tomography
- On the improved rates of convergence for Matérn-type kernel ridge regression with application to calibration of computer models
- Iterative kernel regression with preconditioning
- Spectral algorithms for functional linear regression
- How many neurons do we need? A refined analysis for shallow networks trained with gradient descent
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
- Title not available
- Least squares approximations in linear statistical inverse learning problems
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- An elementary analysis of ridge regression with random design
- Distributed minimum error entropy algorithms
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Optimal rate of the regularized regression learning algorithm
- Lower bounds for invariant statistical models with applications to principal component analysis
- Sketching with Spherical Designs for Noisy Data Fitting on Spheres
- From inexact optimization to learning via gradient concentration
- Online regularized pairwise learning with least squares loss
- Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- The empirical process of residuals from an inverse regression
- Mini-workshop: Mathematical foundations of robust and generalizable learning. Abstracts from the mini-workshop held October 2--8, 2022
- Optimality of robust online learning
- Convergence of regularization methods with filter functions for a regularization parameter chosen with GSURE and mildly ill-posed inverse problems
- Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
- Sobolev norm learning rates for regularized least-squares algorithms
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces
- Convergence Rates for Learning Linear Operators from Noisy Data
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- Inverse learning in Hilbert scales
- Optimal indirect estimation for linear inverse problems with discretely sampled functional data
- Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
- Distributed spectral pairwise ranking algorithms
- Convex regularization in statistical inverse learning problems
- Optimality of regularized least squares ranking with imperfect kernels
- Convergences of regularized algorithms and stochastic gradient methods with random projections
- Kernel conjugate gradient methods with random projections