On Approximation by Neural Networks with Optimized Activation Functions and Fixed Weights
From MaRDI portal
Publication:5882452
Recommendations
- Estimates for approximation error of feedforward neural networks
- A formula for the approximation of functions by single hidden layer neural networks with weights from two straight lines
- Computing the approximation error for neural networks with weights varying on fixed directions
- Construction and approximation rate for feedforward neural network operators with sigmoidal functions
- Neural networks with single hidden layer and the best polynomial approximation
Cites work
- An approximation by neural networks with a fixed weight
- Approximation by Ridge functions and neural networks with one hidden layer
- Approximation by neural network operators activated by smooth ramp functions
- Approximation by neural networks with sigmoidal functions
- Approximation by superposition of sigmoidal and radial basis functions
- Approximation by superpositions of a sigmoidal function
- Degree of approximation by neural and translation networks with a single hidden layer
- Error estimates for the modified truncations of approximate approximation with Gaussian kernels
- Interpolation by neural network operators activated by ramp functions
- Multilayer feedforward networks are universal approximators
- On approximation by univariate sigmoidal neural networks
- Rate of convergence of some neural network operators to the unit-univariate case
- Rates of approximation by neural networks with four layers
- The approximation operators with sigmoidal functions
- The essential order of approximation for neural networks
- Uniform approximation by neural networks
- Universal approximation bounds for superpositions of a sigmoidal function
Cited in (9)
- Construction and approximation rate for feedforward neural network operators with sigmoidal functions
- Approximation by neural networks with weights varying on a finite set of directions
- On the antiderivatives of \(x^p/(1 - x)\) with an application to optimize loss functions for classification with neural networks
- On the approximation by single hidden layer feedforward neural networks with fixed weights
- Hölder continuous activation functions in neural networks
- Approximation rates for neural networks with general activation functions
- Neural network interpolation operators activated by smooth ramp functions
- Approximation by network operators with logistic activation functions
- Approximation with neural networks activated by ramp sigmoids