A representer theorem for deep kernel learning
zbMath: 1489.62197 · arXiv: 1709.10441 · MaRDI QID: Q5381118
Christian Rieger, Bastian Bohn, Michael Griebel
Publication date: 7 June 2019
Full work available at URL: https://arxiv.org/abs/1709.10441
Keywords: artificial neural networks; representer theorem; deep kernel learning; multilayer kernel; regularized least-squares regression
MSC classification:
62G08: Nonparametric regression and quantile regression
62J05: Linear regression; mixed models
68W40: Analysis of algorithms
68T07: Artificial neural networks and deep learning
68T05: Learning and adaptive systems in artificial intelligence
Related Items
- What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory
- Statistical inference using regularized M-estimation in the reproducing kernel Hilbert space for handling missing data
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- A unifying representer theorem for inverse problems and machine learning
Cites Work
- Optimal quasi-Monte Carlo rules on order 2 digital nets for the numerical integration of multivariate periodic functions
- Reproducing kernels of generalized Sobolev spaces via a Green function approach with distributional operators
- Stability of kernel-based interpolation
- Support Vector Machines
- Error Estimates for Multivariate Regression on Discretized Function Spaces
- Sparse grids
- Approximation of bi-variate functions: singular value decomposition versus sparse grids
- A Correspondence Between Bayesian Estimation on Stochastic Processes and Smoothing by Splines
- On Learning Vector-Valued Functions
- Theory of Reproducing Kernels
- Scattered Data Approximation
- Approximation by superpositions of a sigmoidal function