Cross-validation based adaptation for regularization operators in learning theory
DOI: 10.1142/S0219530510001564
zbMATH Open: 1209.68405
OpenAlex: W2046493556
Wikidata: Q60700496 (Scholia: Q60700496)
MaRDI QID: Q3560100
FDO: Q3560100
Publication date: 19 May 2010
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530510001564
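For orientation, below is a minimal sketch of the general technique the title refers to: choosing a Tikhonov regularization parameter for kernel ridge regression by K-fold cross-validation. It is an illustration under assumed choices (Gaussian kernel, squared loss, synthetic data; all function names are hypothetical), not the specific adaptation procedure of the paper.

```python
# Illustrative sketch only (assumed setup, not the paper's algorithm):
# pick the Tikhonov regularization parameter lambda for kernel ridge
# regression by K-fold cross-validation on held-out squared error.
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def krr_fit_predict(X_train, y_train, X_test, lam):
    """Tikhonov-regularized least squares in the RKHS:
    alpha = (K + lam * n * I)^{-1} y,  f(x) = sum_i alpha_i k(x, x_i)."""
    n = len(X_train)
    K = gaussian_kernel(X_train, X_train)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y_train)
    return gaussian_kernel(X_test, X_train) @ alpha

def cv_select_lambda(X, y, lambdas, n_folds=5, seed=None):
    """Return the lambda with the smallest average held-out squared error."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)

    def cv_risk(lam):
        errs = []
        for k in range(n_folds):
            test = folds[k]
            train = np.concatenate(folds[:k] + folds[k + 1:])
            pred = krr_fit_predict(X[train], y[train], X[test], lam)
            errs.append(np.mean((pred - y[test]) ** 2))
        return np.mean(errs)

    return min(lambdas, key=cv_risk)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
    best = cv_select_lambda(X, y, np.logspace(-6, 0, 13), seed=0)
    print(f"cross-validated lambda: {best:.2e}")
```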
Recommendations
- Adaptive kernel methods using the balancing principle
- On regularization algorithms in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Model selection for regularized least-squares algorithm in learning theory
- Learning rates of least-square regularized regression
MSC Classification
- Learning and adaptive systems in artificial intelligence (68T05)
- General topics in the theory of data (68P01)
Cites Work
- Regularization networks and support vector machines
- Remarks on Inequalities for Large Deviation Probabilities
- Asymptotics of cross-validated risk estimation in estimator selection and performance assessment
- On the mathematical foundations of learning
- A distribution-free theory of nonparametric regression
- Optimal rates for the regularized least-squares algorithm
- Shannon sampling. II: Connections to learning theory
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- Sums and Gaussian vectors
- Discretization error analysis for Tikhonov regularization
Cited In (43)
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Optimal learning rates for kernel partial least squares
- Nyström subsampling method for coefficient-based regularized regression
- Kernel gradient descent algorithm for information theoretic learning
- Learning theory of distributed spectral algorithms
- An empirical feature-based learning algorithm producing sparse approximations
- Weighted spectral filters for kernel interpolation on spheres: estimates of prediction accuracy for noisy data
- A note on stability of error bounds in statistical learning theory
- Noise Level Free Regularization of General Linear Inverse Problems under Unconstrained White Noise
- Thresholded spectral algorithms for sparse approximations
- Stability analysis of learning algorithms for ontology similarity computation
- Distributed kernel-based gradient descent algorithms
- Multi-penalty regularization in learning theory
- Distributed learning with multi-penalty regularization
- Indefinite kernel network with dependent sampling
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- Convergence rates of kernel conjugate gradient for random design regression
- A linear functional strategy for regularized ranking
- Distributed regularized least squares with flexible Gaussian kernels
- Optimal rates for regularization of statistical inverse learning problems
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Convergence rate of kernel canonical correlation analysis
- Regularization: From Inverse Problems to Large-Scale Machine Learning
- A statistical learning assessment of Huber regression
- Fast cross-validation in harmonic approximation
- Analysis of Regression Algorithms with Unbounded Sampling
- Balancing principle in supervised learning for a general regularization scheme
- Adaptive kernel methods using the balancing principle
- Multi-task learning via linear functional strategy
- Adaptive parameter selection for kernel ridge regression
- Learning sets with separating kernels
- On a regularization of unsupervised domain adaptation in RKHS
- Moving quantile regression
- Distributed learning and distribution regression of coefficient regularization
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- Kernel conjugate gradient methods with random projections