CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
From MaRDI portal
Publication: 3560100
DOI: 10.1142/S0219530510001564
zbMath: 1209.68405
OpenAlex: W2046493556
Wikidata: Q60700496 (Scholia: Q60700496)
MaRDI QID: Q3560100
Publication date: 19 May 2010
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530510001564
Learning and adaptive systems in artificial intelligence (68T05)
General topics in the theory of data (68P01)
Related Items (41)
- Multi-penalty regularization in learning theory
- Optimal learning rates for kernel partial least squares
- Unnamed Item
- Unnamed Item
- A linear functional strategy for regularized ranking
- Multi-task learning via linear functional strategy
- Distributed learning with multi-penalty regularization
- Learning theory of distributed spectral algorithms
- An empirical feature-based learning algorithm producing sparse approximations
- Unnamed Item
- Unnamed Item
- Noise Level Free Regularization of General Linear Inverse Problems under Unconstrained White Noise
- Adaptive kernel methods using the balancing principle
- Kernel gradient descent algorithm for information theoretic learning
- Distributed learning and distribution regression of coefficient regularization
- Convergence rate of kernel canonical correlation analysis
- Optimal rates for regularization of statistical inverse learning problems
- Kernel conjugate gradient methods with random projections
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Distributed kernel-based gradient descent algorithms
- Learning sets with separating kernels
- Stability analysis of learning algorithms for ontology similarity computation
- Balancing principle in supervised learning for a general regularization scheme
- Distributed regularized least squares with flexible Gaussian kernels
- Nyström subsampling method for coefficient-based regularized regression
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Moving quantile regression
- Convergence analysis of distributed multi-penalty regularized pairwise learning
- A statistical learning assessment of Huber regression
- Analysis of Regression Algorithms with Unbounded Sampling
- A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY
- Convergence rates of Kernel Conjugate Gradient for random design regression
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- On a regularization of unsupervised domain adaptation in RKHS
- Fast cross-validation in harmonic approximation
- Unnamed Item
- INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
- Unnamed Item
- Unnamed Item
- Thresholded spectral algorithms for sparse approximations
- Regularization: From Inverse Problems to Large-Scale Machine Learning
Cites Work
- Asymptotics of cross-validated risk estimation in estimator selection and performance assessment
- Model selection for regularized least-squares algorithm in learning theory
- On regularization algorithms in learning theory
- A distribution-free theory of nonparametric regression
- Sums and Gaussian vectors
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Shannon sampling. II: Connections to learning theory
- On the mathematical foundations of learning
- DISCRETIZATION ERROR ANALYSIS FOR TIKHONOV REGULARIZATION
- Remarks on Inequalities for Large Deviation Probabilities