Model selection for regularized least-squares algorithm in learning theory
Publication: 812379
DOI: 10.1007/s10208-004-0134-1 · zbMath: 1083.68106 · OpenAlex: W2099210314 · Wikidata: Q60700507 · Scholia: Q60700507 · MaRDI QID: Q812379
Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco
Publication date: 23 January 2006
Published in: Foundations of Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10208-004-0134-1
Learning and adaptive systems in artificial intelligence (68T05)
Coding and information theory (compaction, compression, models of communication, encoding schemes, etc.) (aspects in computer science) (68P30)
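The record itself does not summarize the method, so the following is a minimal illustrative sketch of the regularized least-squares (kernel ridge regression) setup the title refers to, with the regularization parameter chosen by a simple hold-out split. The Gaussian kernel, the lambda grid, and the hold-out selection rule are assumptions made for this example only; they are not the model-selection procedure analyzed in the paper.

```python
# Illustrative sketch only: kernel regularized least-squares with a hold-out
# choice of the regularization parameter lambda. Kernel, grid, and selection
# rule are example assumptions, not the paper's procedure.
import numpy as np


def gaussian_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X1 and X2."""
    sq_dists = (
        np.sum(X1**2, axis=1)[:, None]
        + np.sum(X2**2, axis=1)[None, :]
        - 2.0 * X1 @ X2.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))


def rls_fit(K, y, lam):
    """Solve (K + n*lam*I) c = y for the coefficient vector c."""
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)


def select_lambda(X_train, y_train, X_val, y_val, lambdas, sigma=1.0):
    """Pick the lambda with the smallest squared error on a validation set."""
    K_train = gaussian_kernel(X_train, X_train, sigma)
    K_val = gaussian_kernel(X_val, X_train, sigma)
    best_lam, best_err = None, np.inf
    for lam in lambdas:
        c = rls_fit(K_train, y_train, lam)
        err = np.mean((K_val @ c - y_val) ** 2)
        if err < best_err:
            best_lam, best_err = lam, err
    return best_lam


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(200, 1))
    y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)
    X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]
    lambdas = np.logspace(-6, 0, 13)
    print("selected lambda:", select_lambda(X_tr, y_tr, X_va, y_va, lambdas))
```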
Related Items (72)
State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings ⋮ Fast rates by transferring from auxiliary hypotheses ⋮ LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS ⋮ LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT ⋮ Least-squares regularized regression with dependent samples and q-penalty ⋮ Geometry on probability spaces ⋮ Nonparametric stochastic approximation with large step-sizes ⋮ Efficiency of classification methods based on empirical risk minimization ⋮ Fully online classification by regularization ⋮ Hermite learning with gradient data ⋮ Fast rates of minimum error entropy with heavy-tailed noise ⋮ Learning rate of distribution regression with dependent samples ⋮ Learning with coefficient-based regularization and \(\ell^1\)-penalty ⋮ Least squares regression with \(l_1\)-regularizer in sum space ⋮ Consistency of learning algorithms using Attouch–Wets convergence ⋮ Optimal learning rates for least squares regularized regression with unbounded sampling ⋮ Learning rates for the kernel regularized regression with a differentiable strongly convex loss ⋮ Just interpolate: kernel ``ridgeless'' regression can generalize ⋮ Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry ⋮ Spectral Algorithms for Supervised Learning ⋮ ERM learning with unbounded sampling ⋮ Concentration estimates for learning with unbounded sampling ⋮ Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs ⋮ Nonasymptotic analysis of robust regression with modified Huber's loss ⋮ Optimal regression rates for SVMs using Gaussian kernels ⋮ Multi-output learning via spectral filtering ⋮ A STUDY ON THE ERROR OF DISTRIBUTED ALGORITHMS FOR BIG DATA CLASSIFICATION WITH SVM ⋮ Conditional quantiles with varying Gaussians ⋮ Learning with Convex Loss and Indefinite Kernels ⋮ Online Pairwise Learning Algorithms ⋮ Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity ⋮ Distributed learning and distribution regression of coefficient regularization ⋮ Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels ⋮ Derivative reproducing properties for kernel methods in learning theory ⋮ Learning rates for kernel-based expectile regression ⋮ Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere ⋮ Unregularized online learning algorithms with general loss functions ⋮ CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY ⋮ Parzen windows for multi-class classification ⋮ Learning and approximation by Gaussians on Riemannian manifolds ⋮ Convergence rates of learning algorithms by random projection ⋮ Learning gradients by a gradient descent algorithm ⋮ Support vector machines regression with \(l^1\)-regularizer ⋮ On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization ⋮ Learning from non-identical sampling for classification ⋮ Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces ⋮ Learning rates of multi-kernel regularized regression ⋮ Coefficient-based regression with non-identical unbounded sampling ⋮ Least-square regularized regression with non-iid sampling ⋮ Rademacher Chaos Complexities for Learning the Kernel Problem ⋮ Balancing principle in supervised learning for a general regularization scheme ⋮ Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory ⋮ Error analysis of multicategory support vector machine classifiers ⋮ ONLINE LEARNING WITH MARKOV SAMPLING ⋮ Online regularized pairwise learning with least squares loss ⋮ Analysis of support vector machines regression ⋮ SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS ⋮ A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY ⋮ Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression ⋮ An elementary analysis of ridge regression with random design ⋮ Unnamed Item ⋮ High order Parzen windows and randomized sampling ⋮ Online Classification with Varying Gaussians ⋮ Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices ⋮ Generalization performance of Gaussian kernels SVMC based on Markov sampling ⋮ Unnamed Item ⋮ Functional linear regression with Huber loss ⋮ Unnamed Item ⋮ Unnamed Item ⋮ Shannon sampling. II: Connections to learning theory ⋮ Thresholded spectral algorithms for sparse approximations ⋮ Regularization: From Inverse Problems to Large-Scale Machine Learning