Spline representation and redundancies of one-dimensional ReLU neural network models
Publication: Q5873929
DOI: 10.1142/S0219530522400103
Authors: Yannick Riebe, Gerlind Plonka, Yu. S. Kolomoitsev
Publication date: 10 February 2023
Published in: Analysis and Applications
Full work available at URL: https://arxiv.org/abs/2207.14609
Recommendations
- Neural splines: exploiting parallelism for function approximation using modular neural networks
- Approximation by Ridge functions and neural networks with one hidden layer
- ExSpliNet: An interpretable and expressive spline-based neural network
- Banach space representer theorems for neural networks and ridge splines
- Approximation Algorithms for Training One-Node ReLU Neural Networks
- Nonlinear approximation and (deep) ReLU networks
Mathematics Subject Classification: Artificial neural networks and deep learning (68T07); Numerical interpolation (65D05); Spline approximation (41A15)
Cites Work
- A practical guide to splines.
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- Optimal approximation of piecewise smooth functions using deep ReLU neural networks
- Error bounds for approximations with deep ReLU networks
- Universality of deep convolutional neural networks
- Deep neural networks for rotation-invariance approximation and learning
- Neural Networks for Localized Approximation
- Optimal approximation with sparsely connected deep neural networks
- Neural network approximation
- Nonlinear approximation and (deep) ReLU networks
- Deep Network Approximation for Smooth Functions
- Deep network approximation characterized by number of neurons
Cited In (8)
- Why rectified linear activation functions? Why max-pooling? A possible explanation
- Neural splines: exploiting parallelism for function approximation using modular neural networks
- A representer theorem for deep neural networks
- Relevant sampling in a reproducing kernel subspace of Orlicz space
- Approximation of Lipschitz Functions Using Deep Spline Neural Networks
- Spline Representation and Redundancies of One-Dimensional ReLU Neural Network Models
- ReLU deep neural networks and linear finite elements
- An embedding of ReLU networks and an analysis of their identifiability