Neural network with unbounded activation functions is universal approximator

From MaRDI portal

Publication: 2399647

DOI: 10.1016/j.acha.2015.12.005
zbMath: 1420.68177
arXiv: 1505.03654
OpenAlex: W3101806332
MaRDI QID: Q2399647

Sho Sonoda, Noboru Murata

Publication date: 24 August 2017

Published in: Applied and Computational Harmonic Analysis

Full work available at URL: https://arxiv.org/abs/1505.03654




Related Items (35)

Machine learning from a continuous viewpoint. I
Nonconvex regularization for sparse neural networks
Neural dynamic sliding mode control of nonlinear systems with both matched and mismatched uncertainties
On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces
Piecewise linear functions representable with infinite width shallow ReLU neural networks
Theory of deep convolutional neural networks. III: Approximating radial functions
Universal approximation properties for an ODENet and a ResNet: mathematical analysis and numerical experiments
A survey on modern trainable activation functions
Continuity properties of the shearlet transform and the shearlet synthesis operator on the Lizorkin type spaces
Deep reinforcement learning for adaptive mesh refinement
The role of nonpolynomiality in uniform approximation by RBF networks of Hankel translates
Explicit representations for Banach subspaces of Lizorkin distributions
Rapid estimation of permeability from digital rock using 3D convolutional neural network
Heaviside function as an activation function
Unnamed Item
Beating a Benchmark: Dynamic Programming May Not Be the Right Numerical Approach
Hilbert C∗-Module for Analyzing Structured Data
Unnamed Item
Unnamed Item
Center Manifold Analysis of Plateau Phenomena Caused by Degeneration of Three-Layer Perceptron
An Interpretive Constrained Linear Model for ResNet and MgNet
A global universality of two-layer neural networks with ReLU activations
Misspecified diffusion models with high-frequency observations and an application to neural networks
Estimation of agent-based models using Bayesian deep learning approach of BayesFlow
A deep learning semiparametric regression for adjusting complex confounding structures
Regression methods in waveform modeling: a comparative study
Geometric deep learning for computational mechanics. I: Anisotropic hyperelasticity
On the double windowed ridgelet transform and its inverse
Deep learning as optimal control problems: models and numerical methods
Fast generalization error bound of deep learning without scale invariance of activation functions
Theory of deep convolutional neural networks. II: Spherical analysis
A mean-field optimal control formulation of deep learning
Symmetry & critical points for a model shallow neural network
Unnamed Item
Unnamed Item


Uses Software


Cites Work


This page was built for publication: Neural network with unbounded activation functions is universal approximator