Derivative reproducing properties for kernel methods in learning theory

From MaRDI portal
Publication: 939547

DOI: 10.1016/j.cam.2007.08.023 · zbMath: 1152.68049 · OpenAlex: W1994655188 · MaRDI QID: Q939547

Ding-Xuan Zhou

Publication date: 22 August 2008

Published in: Journal of Computational and Applied Mathematics

Full work available at URL: https://doi.org/10.1016/j.cam.2007.08.023
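For context on the paper's topic (this sketch is illustrative and not taken from the record itself): in a reproducing kernel Hilbert space with a sufficiently smooth kernel \(K\), derivatives of functions are themselves reproduced by derivatives of the kernel, \(f'(x) = \langle f, \partial_1 K(x,\cdot)\rangle\). For a function in the span of kernel sections, \(f = \sum_j c_j K(\cdot, x_j)\), this means \(f'(x) = \sum_j c_j \,\partial_1 K(x, x_j)\). A minimal numerical check with a Gaussian kernel (all names and values below are hypothetical choices for illustration):

```python
import numpy as np

def k(x, y, s=1.0):
    # Gaussian kernel K(x, y) = exp(-(x - y)^2 / (2 s^2))
    return np.exp(-(x - y) ** 2 / (2 * s ** 2))

def dk_dx(x, y, s=1.0):
    # Partial derivative of K in its first argument
    return -(x - y) / s ** 2 * k(x, y, s)

# A function in the RKHS: f = sum_j c_j K(., x_j)
xs = np.array([-1.0, 0.3, 2.0])   # kernel centers (illustrative)
cs = np.array([0.5, -1.2, 0.8])   # coefficients (illustrative)

def f(x):
    return np.sum(cs * k(x, xs))

def df_via_kernel(x):
    # Derivative reproduced through kernel derivatives
    return np.sum(cs * dk_dx(x, xs))

# Compare against a central finite difference
x0, h = 0.7, 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(df_via_kernel(x0) - fd) < 1e-5
```

The finite-difference value agrees with the kernel-derivative formula to roughly the truncation error of the difference quotient, which is the expected behavior when the kernel is smooth.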



Related Items

Efficient kernel-based variable selection with sparsistency
Error analysis on Hermite learning with gradient data
Reproducing properties of differentiable Mercer-like kernels
Learning by atomic norm regularization with polynomial kernels
The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
Hermite learning with gradient data
Prediction of dynamical time series using kernel based regression and smooth splines
Operator-valued positive definite kernels and differentiable universality
Variable selection based on squared derivative averages
Optimal learning rates for least squares regularized regression with unbounded sampling
Hilbert–Schmidt regularity of symmetric integral operators on bounded domains with applications to SPDE approximations
Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning
Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance
Learning theory approach to a system identification problem involving atomic norm
The convergence rate of semi-supervised regression with quadratic loss
Structure learning via unstructured kernel-based M-estimation
Ensemble forecasts in reproducing kernel Hilbert space family
Concentration estimates for learning with unbounded sampling
Estimates on the derivatives and analyticity of positive definite functions on \(\mathbb{R}^m\)
The performance of semi-supervised Laplacian regularized regression with the least square loss
Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction
Sampling Inequalities and Support Vector Machines for Galerkin Type Data
Reproducing Properties of Differentiable Mercer-Like Kernels on the Sphere
Testing if a nonlinear system is additive or not
Kernel variable selection for multicategory support vector machines
Reproducing Properties of Holomorphic Kernels on Balls of ℂ^q
Learning from non-identical sampling for classification
Differentiability of bizonal positive definite kernels on complex spheres
p-kernel Stein variational gradient descent for data assimilation and history matching
Learning sparse conditional distribution: an efficient kernel-based approach
Variable Selection for Nonparametric Learning with Power Series Kernels
Performance analysis of the LapRSSLG algorithm in learning theory
Discovering model structure for partially linear models
Online regression with varying Gaussians and non-identical distributions
Universal kernels which are continuous on the diagonal
Maximum likelihood estimation for Gaussian processes under inequality constraints
Gradient learning in a classification setting by gradient descent
On the speed of uniform convergence in Mercer's theorem
Fundamental Sets of Functions on Locally Compact Abelian Groups



Cites Work