Model selection for regularized least-squares algorithm in learning theory
Cited in (82)
- Error estimation and model selection (Diss., TU Berlin)
- State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
- Algorithmic Learning Theory
- A note on stability of error bounds in statistical learning theory
- Error analysis of multicategory support vector machine classifiers
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- scientific article; zbMATH DE number 1928797
- Fast rates by transferring from auxiliary hypotheses
- Learning rate of distribution regression with dependent samples
- Distributed robust regression with correntropy losses and regularization kernel networks
- Functional linear regression with Huber loss
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Fast rates of minimum error entropy with heavy-tailed noise
- A study on the error of distributed algorithms for big data classification with SVM
- Derivative reproducing properties for kernel methods in learning theory
- Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere
- Nonparametric stochastic approximation with large step-sizes
- Spectral Algorithms for Supervised Learning
- Rademacher Chaos Complexities for Learning the Kernel Problem
- The Loss Rank Principle for Model Selection
- Learning gradients by a gradient descent algorithm
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Thresholded spectral algorithms for sparse approximations
- Least-square regularized regression with non-iid sampling
- Learning and approximation by Gaussians on Riemannian manifolds
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
- Near-ideal model selection by \(\ell _{1}\) minimization
- Sobolev norm learning rates for regularized least-squares algorithms
- Consistency of learning algorithms using Attouch-Wets convergence
- Concentration estimates for learning with unbounded sampling
- Cross-validation based adaptation for regularization operators in learning theory
- Conditional quantiles with varying Gaussians
- Convergence rates of learning algorithms by random projection
- Margin-adaptive model selection in statistical learning
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Online regularized pairwise learning with least squares loss
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
- Least square regression with coefficient regularization by gradient descent
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- Generalization performance of Gaussian kernels SVMC based on Markov sampling
- scientific article; zbMATH DE number 6500982
- Multi-output learning via spectral filtering
- Learning theory of distributed regression with bias corrected regularization kernel network
- Fully online classification by regularization
- Analysis of support vector machines regression
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
- Learning rates for kernel-based expectile regression
- scientific article; zbMATH DE number 7415114
- Just interpolate: kernel ``ridgeless'' regression can generalize
- Shannon sampling. II: Connections to learning theory
- Iterative feature selection in least square regression estimation
- Local learning estimates by integral operators
- Efficiency of classification methods based on empirical risk minimization
- SVM learning and \(L^p\) approximation by Gaussians on Riemannian manifolds
- Hermite learning with gradient data
- Distributed learning with regularized least squares
- Coefficient-based regression with non-identical unbounded sampling
- Balancing principle in supervised learning for a general regularization scheme
- An elementary analysis of ridge regression with random design
- Support vector machines regression with \(l^1\)-regularizer
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Learning rates of multi-kernel regularized regression
- Learning from non-identical sampling for classification
- Unregularized online learning algorithms with general loss functions
- Geometry on probability spaces
- Least squares regression with \(l_1\)-regularizer in sum space
- Parzen windows for multi-class classification
- Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
- Optimal regression rates for SVMs using Gaussian kernels
- Online Pairwise Learning Algorithms
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
- ERM learning with unbounded sampling
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- High order Parzen windows and randomized sampling
- Online learning with Markov sampling
- Learning with convex loss and indefinite kernels
- Distributed learning and distribution regression of coefficient regularization
- Online classification with varying Gaussians