Optimal rates for the regularized least-squares algorithm
Publication: 2385535
DOI: 10.1007/s10208-006-0196-8
zbMath: 1129.68058
OpenAlex: W2012501405
Wikidata: Q60700501
Scholia: Q60700501
MaRDI QID: Q2385535
Ernesto De Vito, Andrea Caponnetto
Publication date: 12 October 2007
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-006-0196-8
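The publication concerns the regularized least-squares (kernel ridge regression) estimator over a reproducing kernel Hilbert space and its optimal learning rates. For orientation only, below is a minimal sketch of that estimator; the Gaussian kernel, the regularization parameter, and the synthetic data are illustrative assumptions and not choices made in the paper.

```python
# Sketch of regularized least squares in an RKHS:
#   f_lambda = argmin_f (1/n) * sum_i (f(x_i) - y_i)^2 + lambda * ||f||_K^2,
# whose solution is f_lambda(x) = sum_i alpha_i K(x, x_i) with
#   alpha = (K + n * lambda * I)^{-1} y.
import numpy as np

def gaussian_kernel(A, B, sigma=0.5):
    # K[i, j] = exp(-||A[i] - B[j]||^2 / (2 * sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_rls(X, y, lam):
    # Solve for the expansion coefficients alpha.
    n = X.shape[0]
    K = gaussian_kernel(X, X)
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def predict(X_train, alpha, X_test):
    return gaussian_kernel(X_test, X_train) @ alpha

# Illustrative synthetic data (not from the paper).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
alpha = fit_rls(X, y, lam=1e-3)
X_test = np.linspace(-1, 1, 5)[:, None]
print(predict(X, alpha, X_test))
```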
Related Items
Machine learning with kernels for portfolio valuation and risk management
Online gradient descent algorithms for functional data learning
State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
Online regression with unbounded sampling
Construction and Monte Carlo estimation of wavelet frames generated by a reproducing kernel
Non-asymptotic error bound for optimal prediction of function-on-function regression by RKHS approach
Deep learning for inverse problems. Abstracts from the workshop held March 7--13, 2021 (hybrid meeting)
Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
Nonparametric regression using needlet kernels for spherical data
Multi-penalty regularization in learning theory
Generalization properties of doubly stochastic learning algorithms
Nonparametric stochastic approximation with large step-sizes
Regularization in kernel learning
Optimal learning rates for kernel partial least squares
Kernel methods for the approximation of some key quantities of nonlinear systems
On regularization algorithms in learning theory
Fast learning rate of multiple kernel learning: trade-off between sparsity and smoothness
Learning rate of distribution regression with dependent samples
Regularized least square regression with unbounded and dependent sampling
Integral operator approach to learning theory with unbounded sampling
Manifold regularization based on Nyström type subsampling
Distributed kernel gradient descent algorithm for minimum error entropy principle
Multi-task learning via linear functional strategy
Learning rates for least square regressions with coefficient regularization
On randomized trace estimates for indefinite matrices with an application to determinants
Distributed learning with multi-penalty regularization
Optimal learning rates for least squares regularized regression with unbounded sampling
Random design analysis of ridge regression
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Tikhonov regularization with oversmoothing penalty for nonlinear statistical inverse problems
Convergence analysis of Tikhonov regularization for non-linear statistical inverse problems
An empirical feature-based learning algorithm producing sparse approximations
A meta-learning approach to the regularized learning -- case study: blood glucose prediction
Just interpolate: kernel "ridgeless" regression can generalize
ERM learning with unbounded sampling
Concentration estimates for learning with unbounded sampling
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
Nonasymptotic analysis of robust regression with modified Huber's loss
Consistency of support vector machines using additive kernels for additive models
Estimating conditional quantiles with the help of the pinball loss
Low-rank kernel approximation of Lyapunov functions using neural networks
Optimal regression rates for SVMs using Gaussian kernels
Multi-output learning via spectral filtering
Estimation of convergence rate for multi-regression learning algorithm
Convergence analysis of online learning algorithm with two-stage step size
Least-squares two-sample test
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
Adaptive kernel methods using the balancing principle
Finite-sample analysis of \(M\)-estimators using self-concordance
Kernel gradient descent algorithm for information theoretic learning
Optimal rates for regularization of statistical inverse learning problems
Learning rates for kernel-based expectile regression
Optimal prediction for high-dimensional functional quantile regression in reproducing kernel Hilbert spaces
Kernel conjugate gradient methods with random projections
Finite sample performance of linear least squares estimation
Optimal rate of the regularized regression learning algorithm
Distributed kernel-based gradient descent algorithms
Importance sampling: intrinsic dimension and computational cost
Generalization ability of online pairwise support vector machine
Almost optimal estimates for approximation and learning by radial basis function networks
Optimal convergence rates of high order Parzen windows with unbounded sampling
Optimal learning rates of \(l^p\)-type multiple kernel learning under general conditions
Learning sets with separating kernels
Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
Coefficient-based regression with non-identical unbounded sampling
Least-square regularized regression with non-iid sampling
Balancing principle in supervised learning for a general regularization scheme
Image and video colorization using vector-valued reproducing kernel Hilbert spaces
Statistical analysis of the moving least-squares method with unbounded sampling
Optimal learning rates for distribution regression
Analysis of regularized least squares for functional linear regression model
Additive functional regression in reproducing kernel Hilbert spaces under smoothness condition
Distributed regularized least squares with flexible Gaussian kernels
Linearized two-layers neural networks in high dimension
Convex multi-task feature learning
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
Moving quantile regression
Fast and strong convergence of online learning algorithms
High-probability bounds for the reconstruction error of PCA
Elastic-net regularization in learning theory
A Vector-Contraction Inequality for Rademacher Complexities
Asymptotic normality of support vector machine variants and other regularized kernel methods
Convergence rates of Kernel Conjugate Gradient for random design regression
On extension theorems and their connection to universal consistency in machine learning
On nonparametric randomized sketches for kernels with further smoothness
Sparse high-dimensional semi-nonparametric quantile regression in a reproducing kernel Hilbert space
Error bounds of the invariant statistics in machine learning of ergodic Itô diffusions
Concentration of weakly dependent Banach-valued sums and applications to statistical learning methods
An elementary analysis of ridge regression with random design
Scalable Gaussian kernel support vector machines with sublinear training time complexity
Model-based kernel sum rule: kernel Bayesian inference with probabilistic models
Optimal rates for coefficient-based regularized regression
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
Half supervised coefficient regularization for regression learning with unbounded sampling
Functional linear regression with Huber loss
Approximate kernel PCA: computational versus statistical trade-off
A sieve stochastic gradient descent estimator for online nonparametric regression in Sobolev ellipsoids
Distribution-free robust linear regression
Oracle-type posterior contraction rates in Bayesian inverse problems
Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Deep learning: a statistical viewpoint
Ivanov-Regularised Least-Squares Estimators over Large RKHSs and Their Interpolation Spaces
LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS
Learning curves of generic features maps for realistic datasets with a teacher-student model
Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime
Distributed spectral pairwise ranking algorithms
Toward Efficient Ensemble Learning with Structure Constraints: Convergent Algorithms and Applications
Quantum machine learning: a classical perspective
Shearlet-based regularization in statistical inverse learning with an application to x-ray tomography
INFERENCE ON THE REPRODUCING KERNEL HILBERT SPACES
Gradient descent for robust kernel-based regression
Learning theory of multiple kernel learning
Nyström type subsampling analyzed as a regularized projection
Learning theory of distributed spectral algorithms
Spectral algorithms for learning with dependent observations
Capacity dependent analysis for functional online learning algorithms
Distributed learning for sketched kernel regression
A prediction model for ranking branch-and-bound procedures for the resource-constrained project scheduling problem
On the K-functional in learning theory
Convergence Rates for Learning Linear Operators from Noisy Data
Convex regularization in statistical inverse learning problems
Decentralized learning over a network with Nyström approximation using SGD
Optimality of regularized least squares ranking with imperfect kernels
Domain Generalization by Functional Regression
Inverse learning in Hilbert scales
Estimates on learning rates for multi-penalty distribution regression
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
Spectral Algorithms for Supervised Learning
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
Optimally tackling covariate shift in RKHS-based nonparametric regression
Benign Overfitting and Noisy Features
Coefficient-based regularized distribution regression
Nonlinear Tikhonov regularization in Hilbert scales for inverse learning
Online regularized learning algorithm for functional data
Learning with Convex Loss and Indefinite Kernels
Faster Kriging: Facing High-Dimensional Simulators
Support vector machines regression with unbounded sampling
Online minimum error entropy algorithm with unbounded sampling
Kernel regression, minimax rates and effective dimensionality: Beyond the regular case
On the Decay Rate of the Singular Values of Bivariate Functions
CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
Learning Rates of lq Coefficient Regularization Learning with Gaussian Kernel
Sampling and Stability
Partially functional linear regression with quadratic regularization
Rademacher Chaos Complexities for Learning the Kernel Problem
Least Square Regression with lp-Coefficient Regularization
Variable Selection for Nonparametric Learning with Power Series Kernels
Nyström subsampling method for coefficient-based regularized regression
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
The Random Feature Model for Input-Output Maps between Banach Spaces
Online regularized pairwise learning with least squares loss
Convergence analysis of distributed multi-penalty regularized pairwise learning
Analysis of Regression Algorithms with Unbounded Sampling
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
Gradient-Based Kernel Dimension Reduction for Regression
A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY
Semi-supervised learning with summary statistics
Distributed learning with indefinite kernels
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
Optimal Rates for Multi-pass Stochastic Gradient Methods
Kernel partial least squares for stationary data
VECTOR VALUED REPRODUCING KERNEL HILBERT SPACES AND UNIVERSALITY
Multikernel Regression with Sparsity Constraint
Estimates of learning rates of regularized regression via polyline functions
Optimal learning with Gaussians and correntropy loss
Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm
On the Effectiveness of Richardson Extrapolation in Data Science
Distributed least squares prediction for functional linear regression
Generalisation error in learning with random features and the hidden manifold model
Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
Comparison theorems on large-margin learning
Thresholded spectral algorithms for sparse approximations
Implicit regularization with strongly convex bias: Stability and acceleration
Regularization: From Inverse Problems to Large-Scale Machine Learning