On regularization algorithms in learning theory
Publication: 870339
DOI: 10.1016/j.jco.2006.07.001
zbMath: 1109.68088
OpenAlex: W2123845705
MaRDI QID: Q870339
Authors: Frank Bauer, Sergei V. Pereverzyev, Lorenzo Rosasco
Publication date: 12 March 2007
Published in: Journal of Complexity
Full work available at URL: https://doi.org/10.1016/j.jco.2006.07.001
Related Items
- Machine learning with kernels for portfolio valuation and risk management
- Online gradient descent algorithms for functional data learning
- On principal components regression, random projections, and column subsampling
- Coefficient regularized regression with non-iid sampling
- Local learning estimates by integral operators
- Multi-penalty regularization in learning theory
- Distributed spectral pairwise ranking algorithms
- Optimal learning rates for kernel partial least squares
- Quantum machine learning: a classical perspective
- New horizons in statistical decision theory. Abstracts from the workshop held September 7--13, 2014
- Fast rates of minimum error entropy with heavy-tailed noise
- Gradient descent for robust kernel-based regression
- A linear functional strategy for regularized ranking
- Regularized least square regression with unbounded and dependent sampling
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Manifold regularization based on Nyström type subsampling
- Multi-task learning via linear functional strategy
- Distributed learning with multi-penalty regularization
- Nyström type subsampling analyzed as a regularized projection
- Learning theory of distributed spectral algorithms
- Least square regression with indefinite kernels and coefficient regularization
- Spectral algorithms for learning with dependent observations
- Capacity dependent analysis for functional online learning algorithms
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
- On the K-functional in learning theory
- Convex regularization in statistical inverse learning problems
- Optimality of regularized least squares ranking with imperfect kernels
- Inverse learning in Hilbert scales
- A meta-learning approach to the regularized learning -- case study: blood glucose prediction
- Efficient kernel canonical correlation analysis using Nyström approximation
- Spectral Algorithms for Supervised Learning
- Deep learning theory of distribution regression with CNNs
- Coefficient-based regularized distribution regression
- Online regularized learning algorithm for functional data
- A neural network algorithm to pattern recognition in inverse problems
- Consistency analysis of spectral regularization algorithms
- Multi-output learning via spectral filtering
- Bias corrected regularization kernel method in ranking
- Adaptive kernel methods using the balancing principle
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Kernel gradient descent algorithm for information theoretic learning
- Convergence rate of kernel canonical correlation analysis
- Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
- Optimal rates for regularization of statistical inverse learning problems
- Learning rates for kernel-based expectile regression
- Cross-validation based adaptation for regularization operators in learning theory
- Distributed kernel-based gradient descent algorithms
- A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression
- Robust kernel-based distribution regression
- Regularized least square algorithm with two kernels
- Learning sets with separating kernels
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- Multi-parameter regularization and its numerical realization
- Sobolev error estimates and a priori parameter selection for semi-discrete Tikhonov regularization
- Balancing principle in supervised learning for a general regularization scheme
- Optimal learning rates for distribution regression
- On the convergence rate and some applications of regularized ranking algorithms
- The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models
- Nyström subsampling method for coefficient-based regularized regression
- Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- A statistical learning assessment of Huber regression
- Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
- Gradient-Based Kernel Dimension Reduction for Regression
- Elastic-net regularization in learning theory
- A note on stability of error bounds in statistical learning theory
- Semi-supervised learning with summary statistics
- Distributed learning with indefinite kernels
- Optimal Rates for Multi-pass Stochastic Gradient Methods
- On a regularization of unsupervised domain adaptation in RKHS
- Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
- Optimal rates for coefficient-based regularized regression
- Thresholding projection estimators in functional linear models
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Analysis of singular value thresholding algorithm for matrix completion
- Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
- Thresholded spectral algorithms for sparse approximations
- From inexact optimization to learning via gradient concentration
Cites Work
- Double operator integrals in a Hilbert space
- Optimal aggregation of classifiers in statistical learning
- Weak convergence and empirical processes. With applications to statistics
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Approximation methods for supervised learning
- Shannon sampling. II: Connections to learning theory
- Sous-espaces d'espaces vectoriels topologiques et noyaux associés (noyaux reproduisants) [Subspaces of topological vector spaces and associated kernels (reproducing kernels)]
- On early stopping in gradient descent learning
- On the mathematical foundations of learning
- Theory of Classification: a Survey of Some Recent Advances
- Discretization error analysis for Tikhonov regularization
- Remarks on Inequalities for Large Deviation Probabilities
- Geometry of linear ill-posed problems in variable Hilbert scales
- Moduli of continuity for operator valued functions
- Convergence Rates of General Regularization Methods for Statistical Inverse Problems and Applications
- Advanced Lectures on Machine Learning
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels