Learning theory estimates via integral operators and their approximations

From MaRDI portal
Publication:2642918

DOI: 10.1007/s00365-006-0659-y
zbMath: 1127.68088
OpenAlex: W1970781863
Wikidata: Q56169176
Scholia: Q56169176
MaRDI QID: Q2642918

Ding-Xuan Zhou, Stephen Smale

Publication date: 6 September 2007

Published in: Constructive Approximation

Full work available at URL: https://doi.org/10.1007/s00365-006-0659-y

Related Items (first 100 items shown)

Error analysis on Hermite learning with gradient data
On the stability of reproducing kernel Hilbert spaces of discrete-time impulse responses
Statistical consistency of coefficient-based conditional quantile regression
Nonparametric regression using needlet kernels for spherical data
Multi-penalty regularization in learning theory
Geometry on probability spaces
Nonparametric stochastic approximation with large step-sizes
Regularization in kernel learning
Hermite learning with gradient data
Regularized least square regression with dependent samples
Optimal learning rates for kernel partial least squares
Kernel-based conditional canonical correlation analysis via modified Tikhonov regularization
Kernel methods for the approximation of some key quantities of nonlinear systems
Learning theory and approximation. Abstracts from the workshop held June 24--30, 2012.
Distributed parametric and nonparametric regression with on-line performance bounds computation
Multi-kernel regularized classifiers
A linear functional strategy for regularized ranking
Regularized least square regression with unbounded and dependent sampling
Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
Integral operator approach to learning theory with unbounded sampling
Learning with coefficient-based regularization and \(\ell^1\)-penalty
The learning rate of \(l_2\)-coefficient regularized classification with strong loss
Learning rates for least square regressions with coefficient regularization
Least squares regression with \(l_1\)-regularizer in sum space
An extension of Mercer's theory to \(L^p\)
Optimal learning rates for least squares regularized regression with unbounded sampling
Least square regression with indefinite kernels and coefficient regularization
Learning gradients via an early stopping gradient descent method
Random design analysis of ridge regression
Learning theory approach to a system identification problem involving atomic norm
Approximation analysis of learning algorithms for support vector regression and quantile regression
An empirical feature-based learning algorithm producing sparse approximations
On complex-valued 2D eikonals. IV: continuation past a caustic
Prediction error identification of linear systems: a nonparametric Gaussian regression approach
On the regularized Laplacian eigenmaps
ERM learning with unbounded sampling
Error analysis for coefficient-based regularized regression in additive models
Learning theory viewpoint of approximation by positive linear operators
The regularized least squares algorithm and the problem of learning halfspaces
Concentration estimates for learning with unbounded sampling
Mercer's theorem on general domains: on the interaction between measures, kernels, and RKHSs
Consistency analysis of spectral regularization algorithms
Optimal regression rates for SVMs using Gaussian kernels
Estimation of convergence rate for multi-regression learning algorithm
Semi-supervised learning with the help of Parzen windows
Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
New robust unsupervised support vector machines
Adaptive kernel methods using the balancing principle
Kernel methods in system identification, machine learning and function estimation: a survey
Classification with non-i.i.d. sampling
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
Indefinite kernel network with \(l^q\)-norm regularization
Analysis of approximation by linear operators on variable \(L_\rho^{p(\cdot)}\) spaces and applications in learning theory
Constructive analysis for least squares regression with generalized \(K\)-norm regularization
Learning rate of support vector machine for ranking
Convergence rate of kernel canonical correlation analysis
Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
Optimal rates for regularization of statistical inverse learning problems
Derivative reproducing properties for kernel methods in learning theory
Constructive analysis for coefficient regularization regression algorithms
Kernel conjugate gradient methods with random projections
Perturbation of convex risk minimization and its application in differential private learning algorithms
Distributed kernel-based gradient descent algorithms
Statistical performance of optimal scoring in reproducing kernel Hilbert spaces
Parzen windows for multi-class classification
Generalization ability of online pairwise support vector machine
Learning and approximation by Gaussians on Riemannian manifolds
Adaptive estimation for nonlinear systems using reproducing kernel Hilbert spaces
The convergence rate of a regularized ranking algorithm
Consistency of regularized spectral clustering
Approximation analysis of gradient descent algorithm for bipartite ranking
Learning from non-identical sampling for classification
Multivariate Bernstein-Durrmeyer operators with arbitrary weight functions
Moving least-square method in learning theory
Concentration estimates for learning with \(\ell^{1}\)-regularizer and data dependent hypothesis spaces
Coefficient-based regression with non-identical unbounded sampling
A new kernel-based approach for linear system identification
Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation
Coefficient-based \(l^q\)-regularized regression with indefinite kernels and unbounded sampling
Estimations of singular functions of kernel cross-covariance operators
Image and video colorization using vector-valued reproducing kernel Hilbert spaces
Analysis of regularized least squares for functional linear regression model
System identification using kernel-based regularization: new insights on stability and consistency issues
Continuum versus discrete networks, graph Laplacians, and reproducing kernel Hilbert spaces
Learning sparse conditional distribution: an efficient kernel-based approach
Estimates of the approximation error using Rademacher complexity: learning vector-valued functions
The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models
A note on application of integral operator in learning theory
Estimation of the number of components of nonparametric multivariate finite mixture models
Theory of deep convolutional neural networks. II: Spherical analysis
Analysis of support vector machines regression
Elastic-net regularization in learning theory
On a regularization of unsupervised domain adaptation in RKHS
Debiased magnitude-preserving ranking: learning rate and bias characterization
An elementary analysis of ridge regression with random design
High order Parzen windows and randomized sampling
Gradient learning in a classification setting by gradient descent
Thresholding projection estimators in functional linear models
Exact minimax risk for linear least squares, and the lower tail of sample covariance matrices
Bayesian frequentist bounds for machine learning and system identification




This page was built for publication: Learning theory estimates via integral operators and their approximations