Optimal learning rates for least squares regularized regression with unbounded sampling
From MaRDI portal
Recommendations
- Learning rates of least-square regularized regression
- Optimal rate of the regularized regression learning algorithm
- Regularized least square regression with unbounded and dependent sampling
- Learning rates for least square regressions with coefficient regularization
- The convergence rate of learning algorithms for least square regression with sample dependent hypothesis spaces
- scientific article; zbMATH DE number 6671876
- Optimal learning rates for distribution regression
- Optimal rates for regularization of statistical inverse learning problems
- Optimal rates for the regularized least-squares algorithm
- Sobolev norm learning rates for regularized least-squares algorithms
Cites work
- A new concentration result for regularized risk minimizers
- Analysis of support vector machine classification
- Capacity of reproducing kernel spaces in learning theory
- Derivative reproducing properties for kernel methods in learning theory
- Estimating the approximation error in learning theory
- Learning Theory
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Leave-One-Out Bounds for Kernel Methods
- Model selection for regularized least-squares algorithm in learning theory
- Multi-kernel regularized classifiers
- Online learning with Markov sampling
- Optimal rates for the regularized least-squares algorithm
- Probability Inequalities for the Sum of Independent Random Variables
- Regularization in kernel learning
- SVM learning and \(L^p\) approximation by Gaussians on Riemannian manifolds
- Support Vector Machines
- Support vector machine soft margin classifiers: error analysis
Cited in (50)
- Least squares regression under weak moment conditions
- Learning rates of least-square regularized regression
- scientific article; zbMATH DE number 7415083
- Nonparametric stochastic approximation with large step-sizes
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Constructive analysis for coefficient regularization regression algorithms
- Optimal learning rates for distribution regression
- Statistical consistency of coefficient-based conditional quantile regression
- On the \(K\)-functional in learning theory
- Consistent identification of Wiener systems: a machine learning viewpoint
- Concentration estimates for learning with unbounded sampling
- Learning interaction kernels in stochastic systems of interacting particles from multiple trajectories
- Convergence analysis of coefficient-based regularization under moment incremental condition
- Error analysis on regularized regression based on the maximum correntropy criterion
- Convergence rate of SVM for kernel-based robust regression
- Optimal learning with Gaussians and correntropy loss
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Support vector machines regression with unbounded sampling
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Regularized least square regression with unbounded and dependent sampling
- Integral operator approach to learning theory with unbounded sampling
- scientific article; zbMATH DE number 6671876
- Deterministic error bounds for kernel-based learning techniques under bounded noise
- Regularized learning schemes in feature Banach spaces
- On the optimal generalization error for weighted least squares under variable individual supervision times
- Statistical analysis of the moving least-squares method with unbounded sampling
- Optimal rates for the regularized least-squares algorithm
- Optimal rate of the regularized regression learning algorithm
- Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
- Coefficient-based regression with non-identical unbounded sampling
- Online minimum error entropy algorithm with unbounded sampling
- Bayesian frequentist bounds for machine learning and system identification
- Perturbation of convex risk minimization and its application in differential private learning algorithms
- Optimal rates for regularization of statistical inverse learning problems
- Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
- Online regression with unbounded sampling
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Indefinite kernel network with dependent sampling
- Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
- ERM learning with unbounded sampling
- Analysis of regularized least-squares in reproducing kernel Kreĭn spaces
- Regression learning with non-identically and non-independently sampling
- Analysis of regression algorithms with unbounded sampling
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Generalization ability of fractional polynomial models
- The learning rate of \(l^p\)-coefficient regularized Shannon sampling algorithm
- System identification using kernel-based regularization: new insights on stability and consistency issues
- The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary