Pages that link to "Item:Q3560100"
From MaRDI portal
The following pages link to CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY (Q3560100):
Displaying 41 items.
- Multi-penalty regularization in learning theory (Q306697)
- An empirical feature-based learning algorithm producing sparse approximations (Q413648)
- Convergence rate of kernel canonical correlation analysis (Q659987)
- Optimal rates for regularization of statistical inverse learning problems (Q667648)
- Fast cross-validation in harmonic approximation (Q778015)
- Optimal learning rates for kernel partial least squares (Q1645280)
- A linear functional strategy for regularized ranking (Q1669294)
- Distributed kernel-based gradient descent algorithms (Q1745365)
- Adaptive kernel methods using the balancing principle (Q1959089)
- Kernel conjugate gradient methods with random projections (Q1979923)
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs (Q1983602)
- Stability analysis of learning algorithms for ontology similarity computation (Q2016684)
- Distributed regularized least squares with flexible Gaussian kernels (Q2036424)
- A statistical learning assessment of Huber regression (Q2054280)
- On a regularization of unsupervised domain adaptation in RKHS (Q2075006)
- Kernel gradient descent algorithm for information theoretic learning (Q2223567)
- Distributed learning and distribution regression of coefficient regularization (Q2223571)
- Learning sets with separating kernels (Q2252512)
- Balancing principle in supervised learning for a general regularization scheme (Q2278452)
- Moving quantile regression (Q2301045)
- Multi-task learning via linear functional strategy (Q2407408)
- Distributed learning with multi-penalty regularization (Q2415399)
- Convergence rates of Kernel Conjugate Gradient for random design regression (Q2835985)
- INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING (Q2855474)
- A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY (Q3096969)
- Analysis of Regression Algorithms with Unbounded Sampling (Q3386411)
- (Q4633060)
- (Q4637006)
- Optimal Rates for Multi-pass Stochastic Gradient Methods (Q4637012)
- Nyström subsampling method for coefficient-based regularized regression (Q4968314)
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions (Q4968723)
- (Q4969055)
- (Q4969211)
- (Q4998897)
- (Q4998979)
- Regularization: From Inverse Problems to Large-Scale Machine Learning (Q5028166)
- (Q5148925)
- Convergence analysis of distributed multi-penalty regularized pairwise learning (Q5220068)
- Thresholded spectral algorithms for sparse approximations (Q5267950)
- Learning theory of distributed spectral algorithms (Q5348011)
- Noise Level Free Regularization of General Linear Inverse Problems under Unconstrained White Noise (Q6164171)