Adaptive kernel methods using the balancing principle
From MaRDI portal
Publication: 1959089
DOI: 10.1007/s10208-010-9064-2 · zbMath: 1204.68154 · OpenAlex: W2139252659 · MaRDI QID: Q1959089
Ernesto De Vito, Lorenzo Rosasco, Sergei V. Pereverzyev
Publication date: 6 October 2010
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-010-9064-2
Computational learning theory (68Q32); Learning and adaptive systems in artificial intelligence (68T05)
Related Items
- Concentration inequalities for cross-validation in scattered data approximation
- Multi-penalty regularization in learning theory
- Optimal learning rates for kernel partial least squares
- Optimal Convergence of the Discrepancy Principle for Polynomially and Exponentially Ill-Posed Operators under White Noise
- A machine learning approach to optimal Tikhonov regularization I: Affine manifolds
- On an unsupervised method for parameter selection for the elastic net
- Gradient descent for robust kernel-based regression
- A linear functional strategy for regularized ranking
- Manifold regularization based on Nyström type subsampling
- Multi-task learning via linear functional strategy
- A Lepskiĭ-type stopping rule for the covariance estimation of multi-dimensional Lévy processes
- On the K-functional in learning theory
- A meta-learning approach to the regularized learning -- case study: blood glucose prediction
- A Study on Regularization for Discrete Inverse Problems with Model-Dependent Noise
- Kernel gradient descent algorithm for information theoretic learning
- Distributed learning and distribution regression of coefficient regularization
- The Goldenshluger-Lepski method for constrained least-squares estimators over RKHSs
- Robust kernel-based distribution regression
- Multi-parameter regularization and its numerical realization
- Balancing principle in supervised learning for a general regularization scheme
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- About the Balancing Principle for Choice of the Regularization Parameter
- On a regularization of unsupervised domain adaptation in RKHS
- Fast cross-validation in harmonic approximation
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Cites Work
- Asymptotics of cross-validated risk estimation in estimator selection and performance assessment
- On regularization algorithms in learning theory
- Elastic-net regularization in learning theory
- Risk bounds for model selection via penalization
- On adaptive inverse estimation of linear functional in Hilbert scales
- Complexity regularization via localized random penalties
- Optimal aggregation of classifiers in statistical learning
- Regularization networks and support vector machines
- Local Rademacher complexities and oracle inequalities in risk minimization. (2004 IMS Medallion Lecture). (With discussions and rejoinder)
- Optimal rates for the regularized least-squares algorithm
- Approximation methods for supervised learning
- Consistency and robustness of kernel-based regression in convex risk minimization
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On early stopping in gradient descent learning
- On the mathematical foundations of learning
- Regularization Algorithms for Learning That Are Equivalent to Multilayer Networks
- Theory of Classification: a Survey of Some Recent Advances
- Learning Theory
- Oracle inequalities for multi-fold cross validation
- Cross-validation based adaptation for regularization operators in learning theory
- Boosting with the L2 Loss
- On a Problem of Adaptive Estimation in Gaussian White Noise
- Regularization and Variable Selection Via the Elastic Net
- On the Adaptive Selection of the Parameter in Regularization of Ill-Posed Problems
- The Lepskii principle revisited
- Theory of Reproducing Kernels
- Introduction to nonparametric estimation
- The elements of statistical learning. Data mining, inference, and prediction