On the representation of continuous functions of many variables by superposition of continuous functions of one variable and addition
From MaRDI portal
Publication: Q5330769
DOI: 10.1090/TRANS2/028/04 · zbMATH Open: 0125.30803 · OpenAlex: W4238143865 · MaRDI QID: Q5330769
Authors: A. N. Kolmogorov
Publication date: 1963
Published in: Seventeen Papers on Analysis
Full work available at URL: https://doi.org/10.1090/trans2/028/04
Cited In (39)
- Non‐linear Processing in Artificial Synapses
- Representations of hypergraph states with neural networks
- Representations of graph states with neural networks
- An improvement in the superposition theorem of Kolmogorov
- On the shape of posterior densities and credible sets in instrumental variable regression models with reduced rank: an application of flexible sampling methods using neural networks
- Monte Carlo method for pricing lookback type options in Lévy models
- Resolving \(G\)-torsors by abelian base extensions.
- A universal mapping for Kolmogorov's superposition theorem
- Computational aspects of Kolmogorov's superposition theorem
- Approximative versions of Kolmogorov's superposition theorem, proved constructively
- A survey of solved and unsolved problems on superpositions of functions
- A Superposition Theorem for Bounded Continuous Functions
- A description of continua basically embeddable in \(\mathbb{R}^2\)
- Combined small scale high dimensional model representation
- A note on computing with Kolmogorov superpositions without iterations
- A nonlinear rainfall-runoff model using neural network technique: Example in fractured porous media
- Approximate complexity and functional representation
- Approximation from the topological viewpoint
- An upper bound for higher topological complexity and higher strongly equivariant complexity
- Title not available
- Dimension and superposition of bounded continuous functions on locally compact, separable metric spaces
- A property of \(C_p[0,1]\)
- Chebyshev approximation by discrete superposition. Application to neural networks
- A representation method for PWL functions oriented to parallel processing
- A note on the representation of continuous functions by linear superpositions
- Approximation by a sum of two algebras. The lightning bolt principle
- Open problems on graphs arising from geometric topology
- Optimization of neural network training for image recognition based on trigonometric polynomial approximation
- Linear programming, recurrent associative memories, and feed-forward neural networks
- Search for the global extremum using the correlation indicator for neural networks supervised learning
- Superpositions of continuous functions
- Multilayer perceptrons and radial basis function neural network methods for the solution of differential equations: a survey
- On a solution to a nonlocal inverse coefficient problem using feed-forward neural networks
- Sur le théorème de superposition de Kolmogorov
- An approximation method for the optimization of continuous functions of \(n\) variables by densifying their domains
- Modeling of complex dynamic systems using differential neural networks with the incorporation of a priori knowledge
- Free subspaces of free locally convex spaces
- A rapprochement of the theories of radiative transfer and linear stochastic estimation
- Complexity of functions: Some questions, conjectures, and results