Pages that link to "Item:Q812379"
From MaRDI portal
The following pages link to Model selection for regularized least-squares algorithm in learning theory (Q812379):
Displayed 50 items.
- Nonparametric stochastic approximation with large step-sizes (Q309706)
- Learning with coefficient-based regularization and \(\ell^1\)-penalty (Q380980)
- Least squares regression with \(l_1\)-regularizer in sum space (Q390496)
- Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs (Q431161)
- Multi-output learning via spectral filtering (Q439000)
- Unregularized online learning algorithms with general loss functions (Q504379)
- Learning from non-identical sampling for classification (Q541601)
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces (Q550498)
- Optimal learning rates for least squares regularized regression with unbounded sampling (Q617656)
- Learning rates for kernel-based expectile regression (Q669274)
- Geometry on probability spaces (Q843724)
- Efficiency of classification methods based on empirical risk minimization (Q844356)
- Hermite learning with gradient data (Q848563)
- Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels (Q939089)
- Derivative reproducing properties for kernel methods in learning theory (Q939547)
- Parzen windows for multi-class classification (Q958247)
- Learning and approximation by Gaussians on Riemannian manifolds (Q960002)
- Learning rates of multi-kernel regularized regression (Q974504)
- Analysis of support vector machines regression (Q1022433)
- High order Parzen windows and randomized sampling (Q1047130)
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity (Q1722329)
- Support vector machines regression with \(l^1\)-regularizer (Q1759352)
- ERM learning with unbounded sampling (Q1943018)
- Concentration estimates for learning with unbounded sampling (Q1946480)
- Optimal regression rates for SVMs using Gaussian kernels (Q1951100)
- Conditional quantiles with varying Gaussians (Q1955538)
- Coefficient-based regression with non-identical unbounded sampling (Q2016624)
- An elementary analysis of ridge regression with random design (Q2080945)
- Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices (Q2091833)
- Functional linear regression with Huber loss (Q2099272)
- State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings (Q2123214)
- Fast rates of minimum error entropy with heavy-tailed noise (Q2168008)
- Learning rate of distribution regression with dependent samples (Q2171946)
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss (Q2191832)
- Just interpolate: kernel ``ridgeless'' regression can generalize (Q2196223)
- Distributed learning and distribution regression of coefficient regularization (Q2223571)
- Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere (Q2238038)
- Convergence rates of learning algorithms by random projection (Q2252501)
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization (Q2256621)
- Least-square regularized regression with non-iid sampling (Q2272113)
- Balancing principle in supervised learning for a general regularization scheme (Q2278452)
- Generalization performance of Gaussian kernels SVMC based on Markov sampling (Q2339390)
- Fast rates by transferring from auxiliary hypotheses (Q2361574)
- Fully online classification by regularization (Q2381648)
- Learning gradients by a gradient descent algorithm (Q2480334)
- Shannon sampling. II: Connections to learning theory (Q2581447)
- Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry (Q2687067)
- Nonasymptotic analysis of robust regression with modified Huber's loss (Q2693696)
- Least square regression with coefficient regularization by gradient descent (Q2893483)
- Least-squares regularized regression with dependent samples and \(q\)-penalty (Q2903163)