A direct approach for function approximation on data defined manifolds
DOI: 10.1016/j.neunet.2020.08.018
zbMath: 1475.68319
arXiv: 1908.00156
OpenAlex: W3080284981
Wikidata: Q99413653 (Scholia: Q99413653)
MaRDI QID: Q2057766
Publication date: 7 December 2021
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1908.00156
Mathematics Subject Classification
- Statistics on manifolds (62R30)
- Artificial neural networks and deep learning (68T07)
- Approximation by polynomials (41A10)
- Approximation by operators (in particular, by integral operators) (41A35)
Related Items
- A deep network construction that adapts to intrinsic dimensionality beyond the domain
- Tikhonov regularization for polynomial approximation problems in Gauss quadrature points
- Construct Deep Neural Networks based on Direct Sampling Methods for Solving Electrical Impedance Tomography
Cites Work
- A generalized diffusion frame for parsimonious representation of functions on data defined manifolds
- Marcinkiewicz-Zygmund measures on manifolds
- Semi-supervised learning on Riemannian manifolds
- A Fourier-invariant method for locating point-masses and computing their attributes
- Networks and the best approximation property
- Towards a theoretical foundation for Laplacian-based manifold methods
- Eignets for function approximation on manifolds
- A unified method for super-resolution recovery and real exponential-sum separation
- A unified framework for harmonic analysis of functions on directed graphs and changing data
- Bigeometric organization of deep nets
- When is approximation by Gaussian networks necessarily a linear process?
- An analysis of training and generalization errors in shallow and deep networks
- Diffusion polynomial frames on metric measure spaces
- From graph to manifold Laplacian: the convergence rate
- On the mathematical foundations of learning
- Deep vs. shallow networks: An approximation theory perspective
- Learning Theory
- Universal local parametrizations via heat kernels and eigenfunctions of the Laplacian
- Laplacian Eigenmaps for Dimensionality Reduction and Data Representation
- Mean Convergence of Expansions in Laguerre and Hermite Series
- Local Approximation Using Hermite Functions