On Approximation by Neural Networks with Optimized Activation Functions and Fixed Weights
Publication: Q5882452
DOI: 10.4208/ATA.OA-2021-0006
MaRDI QID: Q5882452
Fengjun Li, Dansheng Yu, Yunyou Qian
Publication date: 16 March 2023
Published in: Analysis in Theory and Applications
Classification (MSC)
- Approximation by rational functions (41A20)
- Rate of convergence, degree of approximation (41A25)
- Approximation by operators (in particular, by integral operators) (41A35)
Cites Work
- Universal approximation bounds for superpositions of a sigmoidal function
- Multilayer feedforward networks are universal approximators
- Approximation by superpositions of a sigmoidal function
- The approximation operators with sigmoidal functions
- Uniform approximation by neural networks
- Approximation by ridge functions and neural networks with one hidden layer
- Approximation by superposition of sigmoidal and radial basis functions
- Rate of convergence of some neural network operators to the unit-univariate case
- Degree of approximation by neural and translation networks with a single hidden layer
- Title not available
- Error estimates for the modified truncations of approximate approximation with Gaussian kernels
- An approximation by neural networks with a fixed weight
- Approximation by neural networks with sigmoidal functions
- The essential order of approximation for neural networks
- Interpolation by neural network operators activated by ramp functions
- On approximation by univariate sigmoidal neural networks
- Title not available
Cited In (9)
- Hölder continuous activation functions in neural networks
- On the approximation by single hidden layer feedforward neural networks with fixed weights
- Approximation by network operators with logistic activation functions
- On the antiderivatives of \(x^p/(1 - x)\) with an application to optimize loss functions for classification with neural networks
- Construction and approximation rate for feedforward neural network operators with sigmoidal functions
- Neural network interpolation operators activated by smooth ramp functions
- Approximation with neural networks activated by ramp sigmoids
- Approximation rates for neural networks with general activation functions
- Approximation by neural networks with weights varying on a finite set of directions