A representer theorem for deep kernel learning
zbMath: 1489.62197 · arXiv: 1709.10441 · MaRDI QID: Q5381118
Christian Rieger, Bastian Bohn, Michael Griebel
Publication date: 7 June 2019
Full work available at URL: https://arxiv.org/abs/1709.10441
Keywords: artificial neural networks; representer theorem; deep kernel learning; multilayer kernel; regularized least-squares regression
MSC classification:
- Nonparametric regression and quantile regression (62G08)
- Linear regression; mixed models (62J05)
- Analysis of algorithms (68W40)
- Artificial neural networks and deep learning (68T07)
- Learning and adaptive systems in artificial intelligence (68T05)
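For context (not part of the original entry, a standard background statement): the classical representer theorem, which this publication generalizes to multilayer (deep) kernels, says that for training data $(x_i, y_i)_{i=1}^{n}$, a kernel $k$ with reproducing kernel Hilbert space $\mathcal{H}_k$, and regularization parameter $\lambda > 0$, the regularized least-squares problem

\[
  \min_{f \in \mathcal{H}_k} \; \sum_{i=1}^{n} \bigl(f(x_i) - y_i\bigr)^2 + \lambda \, \|f\|_{\mathcal{H}_k}^2
\]

admits a minimizer of the finite form

\[
  f^*(x) = \sum_{i=1}^{n} \alpha_i \, k(x, x_i), \qquad \alpha_i \in \mathbb{R},
\]

so the infinite-dimensional optimization reduces to solving for the $n$ coefficients $\alpha_i$. The publication above establishes an analogous finite representation when $k$ is replaced by a composition of kernels, as in deep kernel learning.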
Related Items (8)
- Unnamed Item
- What Kinds of Functions Do Deep Neural Networks Learn? Insights from Variational Spline Theory
- Statistical inference using regularized M-estimation in the reproducing kernel Hilbert space for handling missing data
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss
- Data-Driven Kernel Designs for Optimized Greedy Schemes: A Machine Learning Perspective
- A unifying representer theorem for inverse problems and machine learning
- Unnamed Item
- Do ideas have shape? Idea registration as the continuous limit of artificial neural networks
Cites Work
- Unnamed Item
- Unnamed Item
- Optimal quasi-Monte Carlo rules on order 2 digital nets for the numerical integration of multivariate periodic functions
- Reproducing kernels of generalized Sobolev spaces via a Green function approach with distributional operators
- Stability of kernel-based interpolation
- Support Vector Machines
- Error Estimates for Multivariate Regression on Discretized Function Spaces
- Sparse grids
- Approximation of bi-variate functions: singular value decomposition versus sparse grids
- A Correspondence Between Bayesian Estimation on Stochastic Processes and Smoothing by Splines
- On Learning Vector-Valued Functions
- Theory of Reproducing Kernels
- Scattered Data Approximation
- Approximation by superpositions of a sigmoidal function