Model selection for regularized least-squares algorithm in learning theory

Publication: 812379

DOI: 10.1007/s10208-004-0134-1
zbMath: 1083.68106
OpenAlex: W2099210314
Wikidata: Q60700507 (Scholia: Q60700507)
MaRDI QID: Q812379

Ernesto De Vito, Andrea Caponnetto, Lorenzo Rosasco

Publication date: 23 January 2006

Published in: Foundations of Computational Mathematics

Full work available at URL: https://doi.org/10.1007/s10208-004-0134-1
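For context (this is not part of the portal record), the regularized least-squares algorithm named in the title is, in its standard form, the minimizer of the empirical squared loss penalized by the RKHS norm. A minimal sketch of that functional, assuming a Mercer kernel \(K\) with reproducing kernel Hilbert space \(\mathcal{H}_K\), a training sample \(\mathbf{z} = \{(x_i, y_i)\}_{i=1}^{n}\), and a regularization parameter \(\lambda > 0\):

\[
f_{\lambda}^{\mathbf{z}} \;=\; \operatorname*{arg\,min}_{f \in \mathcal{H}_K} \; \frac{1}{n} \sum_{i=1}^{n} \bigl(f(x_i) - y_i\bigr)^2 \;+\; \lambda \,\|f\|_{K}^{2}.
\]

Here "model selection" refers, roughly, to a data-driven choice of the parameter \(\lambda\).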




Related Items (72)

State-based confidence bounds for data-driven stochastic reachability using Hilbert space embeddings
Fast rates by transferring from auxiliary hypotheses
LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS
LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
Least-squares regularized regression with dependent samples and q-penalty
Geometry on probability spaces
Nonparametric stochastic approximation with large step-sizes
Efficiency of classification methods based on empirical risk minimization
Fully online classification by regularization
Hermite learning with gradient data
Fast rates of minimum error entropy with heavy-tailed noise
Learning rate of distribution regression with dependent samples
Learning with coefficient-based regularization and \(\ell^1\)-penalty
Least squares regression with \(l_1\)-regularizer in sum space
Consistency of learning algorithms using Attouch–Wets convergence
Optimal learning rates for least squares regularized regression with unbounded sampling
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Just interpolate: kernel "ridgeless" regression can generalize
Convergence of the forward-backward algorithm: beyond the worst-case with the help of geometry
Spectral Algorithms for Supervised Learning
ERM learning with unbounded sampling
Concentration estimates for learning with unbounded sampling
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
Nonasymptotic analysis of robust regression with modified Huber's loss
Optimal regression rates for SVMs using Gaussian kernels
Multi-output learning via spectral filtering
A STUDY ON THE ERROR OF DISTRIBUTED ALGORITHMS FOR BIG DATA CLASSIFICATION WITH SVM
Conditional quantiles with varying Gaussians
Learning with Convex Loss and Indefinite Kernels
Online Pairwise Learning Algorithms
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
Distributed learning and distribution regression of coefficient regularization
Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
Derivative reproducing properties for kernel methods in learning theory
Learning rates for kernel-based expectile regression
Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere
Unregularized online learning algorithms with general loss functions
CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
Parzen windows for multi-class classification
Learning and approximation by Gaussians on Riemannian manifolds
Convergence rates of learning algorithms by random projection
Learning gradients by a gradient descent algorithm
Support vector machines regression with \(l^1\)-regularizer
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Learning from non-identical sampling for classification
Concentration estimates for learning with \(\ell^1\)-regularizer and data dependent hypothesis spaces
Learning rates of multi-kernel regularized regression
Coefficient-based regression with non-identical unbounded sampling
Least-square regularized regression with non-iid sampling
Rademacher Chaos Complexities for Learning the Kernel Problem
Balancing principle in supervised learning for a general regularization scheme
Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
Error analysis of multicategory support vector machine classifiers
ONLINE LEARNING WITH MARKOV SAMPLING
Online regularized pairwise learning with least squares loss
Analysis of support vector machines regression
SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY
Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression
An elementary analysis of ridge regression with random design
Unnamed Item
High order Parzen windows and randomized sampling
Online Classification with Varying Gaussians
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Generalization performance of Gaussian kernels SVMC based on Markov sampling
Unnamed Item
Functional linear regression with Huber loss
Unnamed Item
Unnamed Item
Shannon sampling. II: Connections to learning theory
Thresholded spectral algorithms for sparse approximations
Regularization: From Inverse Problems to Large-Scale Machine Learning




This page was built for publication: Model selection for regularized least-squares algorithm in learning theory