Learning theory estimates via integral operators and their approximations


Publication: 2642918

DOI: 10.1007/S00365-006-0659-Y
zbMath: 1127.68088
OpenAlex: W1970781863
Wikidata: Q56169176
Scholia: Q56169176
MaRDI QID: Q2642918

Authors: Ding-Xuan Zhou, Stephen Smale

Publication date: 6 September 2007

Published in: Constructive Approximation

Full work available at URL: https://doi.org/10.1007/s00365-006-0659-y




Related Items (only showing first 100 items)

Machine learning with kernels for portfolio valuation and risk management
Online regression with unbounded sampling
Coefficient regularized regression with non-iid sampling
Efficient kernel-based variable selection with sparsistency
Normal estimation on manifolds by gradient learning
LOCAL LEARNING ESTIMATES BY INTEGRAL OPERATORS
Reproducing properties of differentiable Mercer-like kernels
LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
Complexity control in statistical learning
Symmetric Measures, Continuous Networks, and Dynamics
Consistent identification of Wiener systems: a machine learning viewpoint
Learning by atomic norm regularization with polynomial kernels
The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
Least-squares regularized regression with dependent samples and q-penalty
ERM learning algorithm for multi-class classification
Fully online classification by regularization
Distributed spectral pairwise ranking algorithms
THE COEFFICIENT REGULARIZED REGRESSION WITH RANDOM PROJECTION
Online regularized pairwise learning with non-i.i.d. observations
Optimal Quadrature-Sparsification for Integral Operator Approximation
Learning with sample dependent hypothesis spaces
Application of integral operator for regularized least-square regression
Nonlinear predictive directions in clinical trials
A Bayesian approach to sparse dynamic network identification
Learning rates of kernel-based robust classification
The optimal solution of multi-kernel regularization learning
Von Neumann indices and classes of positive definite functions
The consistency of least-square regularized regression with negative association sequence
1-Norm support vector machine for ranking with exponentially strongly mixing sequence
Gradient descent for robust kernel-based regression
Reproducing kernels: harmonic analysis and some of their applications
Deterministic error bounds for kernel-based learning techniques under bounded noise
Asymptotic expansions and Voronovskaja type theorems for the multivariate neural network operators
Distributed learning with multi-penalty regularization
Decomposition of Gaussian processes, and factorization of positive definite kernels
Exact asymptotic orders of various randomized widths on Besov classes
Learning rates for the kernel regularized regression with a differentiable strongly convex loss
Quantitative convergence analysis of kernel based large-margin unified machines
Kernel-based maximum correntropy criterion with gradient descent method
The convergence rate of semi-supervised regression with quadratic loss
Approximation of Lyapunov functions from noisy data
On the K-functional in learning theory
Regression learning with non-identically and non-independently sampling
Learning rates of regularized regression for exponentially strongly mixing sequence
Spectral Algorithms for Supervised Learning
Nonasymptotic analysis of robust regression with modified Huber's loss
ERM and RERM are optimal estimators for regression problems when malicious outliers corrupt the labels
Bias corrected regularization kernel method in ranking
The performance of semi-supervised Laplacian regularized regression with the least square loss
Stability and optimization error of stochastic gradient descent for pairwise learning
Distributed learning and distribution regression of coefficient regularization
Coefficient-based regularization network with variance loss for error
Reproducing kernels and choices of associated feature spaces, in the form of \(L^2\)-spaces
Kernel variable selection for multicategory support vector machines
Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere
Robust kernel-based distribution regression
REGULARIZED LEAST SQUARE ALGORITHM WITH TWO KERNELS
Convergence rates of learning algorithms by random projection
Learning gradients by a gradient descent algorithm
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Least-square regularized regression with non-iid sampling
Application of integral operator for vector-valued regression learning
Balancing principle in supervised learning for a general regularization scheme
Simultaneous estimations of optimal directions and optimal transformations for functional data
Optimal learning rates for distribution regression
Least Square Regression with lp-Coefficient Regularization
Approximating and learning by Lipschitz kernel on the sphere
Error analysis of multicategory support vector machine classifiers
ONLINE LEARNING WITH MARKOV SAMPLING
The information-based complexity of approximation problem by adaptive Monte Carlo methods
Learning rates of gradient descent algorithm for classification
Probabilistic error bounds on constraint violation for empirical-analytical Lagrangian models of motion
Optimal rates for spectral algorithms with least-squares regression over Hilbert spaces
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
Moving quantile regression
Convergence Rates of Spectral Regularization Methods: A Comparison between Ill-Posed Inverse Problems and Statistical Kernel Learning
Fast and strong convergence of online learning algorithms
Gradient-Based Kernel Dimension Reduction for Regression
SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
A NOTE ON STABILITY OF ERROR BOUNDS IN STATISTICAL LEARNING THEORY
Regularized ranking with convex losses and \(\ell^1\)-penalty
Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
Optimal rates for coefficient-based regularized regression
Boosting as a kernel-based method
INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
Half supervised coefficient regularization for regression learning with unbounded sampling
Functional linear regression with Huber loss
Error analysis of the kernel regularized regression based on refined convex losses and RKBSs
CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION
Regularization: From Inverse Problems to Large-Scale Machine Learning







This page was built for publication: Learning theory estimates via integral operators and their approximations