Derivative reproducing properties for kernel methods in learning theory
DOI: 10.1016/j.cam.2007.08.023
zbMath: 1152.68049
OpenAlex: W1994655188
MaRDI QID: Q939547
Publication date: 22 August 2008
Published in: Journal of Computational and Applied Mathematics
Full work available at URL: https://doi.org/10.1016/j.cam.2007.08.023
Keywords: learning theory; reproducing kernel Hilbert spaces; semi-supervised learning; Hermite learning; representer theorem; derivative reproducing
MSC classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
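For context, the derivative reproducing property named in the title can be stated schematically (a summary in notation chosen here, not quoted from the record): for a Mercer kernel \(K \in C^{2s}(X \times X)\) on a subset \(X \subseteq \mathbb{R}^n\), every function \(f\) in the RKHS \(\mathcal{H}_K\) lies in \(C^s(X)\), and its partial derivatives are themselves reproduced by the kernel:
\[
(D^{\alpha} f)(x) = \langle f, \, (D^{\alpha} K)_x \rangle_K, \qquad |\alpha| \le s,
\]
where \((D^{\alpha} K)_x(t) = D^{\alpha}_x K(x,t)\) is the partial derivative of the kernel in its first variable, which again belongs to \(\mathcal{H}_K\). This makes derivative evaluations bounded linear functionals on \(\mathcal{H}_K\), which is what underlies representer theorems for Hermite learning (regression with gradient data) among the keywords above.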
Related Items
- Efficient kernel-based variable selection with sparsistency
- Error analysis on Hermite learning with gradient data
- Reproducing properties of differentiable Mercer-like kernels
- Learning by atomic norm regularization with polynomial kernels
- The kernel regularized learning algorithm for solving Laplace equation with Dirichlet boundary
- Hermite learning with gradient data
- Prediction of dynamical time series using kernel based regression and smooth splines
- Operator-valued positive definite kernels and differentiable universality
- Variable selection based on squared derivative averages
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Hilbert–Schmidt regularity of symmetric integral operators on bounded domains with applications to SPDE approximations
- Overcoming the timescale barrier in molecular dynamics: Transfer operators, variational principles and machine learning
- Model Reduction for Nonlinear Systems by Balanced Truncation of State and Gradient Covariance
- Learning theory approach to a system identification problem involving atomic norm
- The convergence rate of semi-supervised regression with quadratic loss
- Structure learning via unstructured kernel-based M-estimation
- Ensemble forecasts in reproducing kernel Hilbert space family
- Concentration estimates for learning with unbounded sampling
- Estimates on the derivatives and analyticity of positive definite functions on \(\mathbb{R}^m\)
- The performance of semi-supervised Laplacian regularized regression with the least square loss
- Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction
- Sampling Inequalities and Support Vector Machines for Galerkin Type Data
- Reproducing Properties of Differentiable Mercer-Like Kernels on the Sphere
- Testing if a nonlinear system is additive or not
- Kernel variable selection for multicategory support vector machines
- Reproducing Properties of Holomorphic Kernels on Balls of \(\mathbb{C}^q\)
- Learning from non-identical sampling for classification
- Differentiability of bizonal positive definite kernels on complex spheres
- p-kernel Stein variational gradient descent for data assimilation and history matching
- Learning sparse conditional distribution: an efficient kernel-based approach
- Variable Selection for Nonparametric Learning with Power Series Kernels
- Performance analysis of the LapRSSLG algorithm in learning theory
- Discovering model structure for partially linear models
- Online regression with varying Gaussians and non-identical distributions
- Universal kernels which are continuous on the diagonal
- Maximum likelihood estimation for Gaussian processes under inequality constraints
- Gradient learning in a classification setting by gradient descent
- On the speed of uniform convergence in Mercer's theorem
- Fundamental Sets of Functions on Locally Compact Abelian Groups
Cites Work
- Semi-supervised learning on Riemannian manifolds
- Model selection for regularized least-squares algorithm in learning theory
- The covering number in learning theory
- Regularization networks and support vector machines
- Convergence analysis of online algorithms
- Approximation with polynomial kernels and SVM classifiers
- Linearly constrained reconstruction of functions by kernels with applications to machine learning
- Learning theory: stability is sufficient for generalization and necessary and sufficient for consistency of empirical risk minimization
- Change of variables for absolutely continuous functions
- Learning theory estimates via integral operators and their approximations
- Capacity of reproducing kernel spaces in learning theory
- Estimating the approximation error in learning theory
- Structural risk minimization over data-dependent hierarchies
- Shannon sampling and function reconstruction from point values
- Neural Network Learning
- On Complexity Issues of Online Learning Algorithms
- Theory of Reproducing Kernels