On the computational power of neural nets
From MaRDI portal
Publication: 1892212
DOI: 10.1006/jcss.1995.1013
zbMath: 0826.68104
OpenAlex: W2067619114
MaRDI QID: Q1892212
Hava T. Siegelmann, Eduardo D. Sontag
Publication date: 5 July 1995
Published in: Journal of Computer and System Sciences
Full work available at URL: https://doi.org/10.1006/jcss.1995.1013
Related Items
Spiking Neural P Systems with Astrocytes
Computability with low-dimensional dynamical systems
A statistical model of neural network learning via the Cramer-Rao lower bound
Analog quantum computing (AQC) and the need for time-symmetric physics
Metric entropy limits on recurrent neural network learning of linear dynamical systems
On The Complexity of Bounded Time Reachability for Piecewise Affine Systems
Diffusive Influence Systems
A family of universal recurrent networks
On the complexity of bounded time and precision reachability for piecewise affine systems
Recurrent Neural Networks with Small Weights Implement Definite Memory Machines
Spiking neural P systems with a flat maximally parallel use of rules
Complexity of reachability problems for finite discrete dynamical systems
The Computational Power of Interactive Recurrent Neural Networks
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
Computation with perturbed dynamical systems
AN ANALOGUE-DIGITAL CHURCH-TURING THESIS
On probabilistic analog automata
Simulation of Turing machine with uEAC-computable functions
Expressive Power of Non-deterministic Evolving Recurrent Neural Networks in Terms of Their Attractor Dynamics
The ARNN model relativises \(\mathrm{P}=\mathrm{NP}\) and \(\mathrm{P}\neq \mathrm{NP}\)
Subrecursive neural networks
Complete controllability of continuous-time recurrent neural networks
Three analog neurons are Turing universal
Design of continuous-time recurrent neural networks with piecewise-linear activation function for generation of prescribed sequences of bipolar vectors
A brief review of neural networks based learning and control and their applications for robots
Formal languages and the NLP black box
Universality of gradient descent neural network training
The Power of Machines That Control Experiments
Noise-robust realization of Turing-complete cellular automata by using neural networks with pattern representation
Asynchronous spiking neural P systems with local synchronization of rules
Nonlinear spiking neural P systems with multiple channels
A provably stable neural network Turing machine with finite precision and time
Designing universal causal deep learning models: The geometric (Hyper)transformer
On the decidability of reachability in continuous time linear time-invariant systems
The expressive power of analog recurrent neural networks on infinite input streams
Quasi-periodic \(\beta\)-expansions and cut languages
Computational capabilities of analog and evolving neural networks over infinite input streams
Energy Complexity of Recurrent Neural Networks
Positive Neural Networks in Discrete Time Implement Monotone-Regular Behaviors
Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing
Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control
Analog computation through high-dimensional physical chaotic neuro-dynamics
The nature of the extended analog computer
Where do Bayesian priors come from?
Automata complete computation with Hodgkin-Huxley neural networks composed of synfire rings
Analog neuron hierarchy
On the computation of Boolean functions by analog circuits of bounded fan-in
Pictorial reasoning with cell assemblies
A review on deep reinforcement learning for fluid mechanics
A survey of computational complexity results in systems and control
Spiking neural P systems with target indications
Boundedness of the Domain of Definition is Undecidable for Polynomial ODEs
Computability with polynomial differential equations
Cut Languages in Rational Bases
Spiking neural P systems with structural plasticity and anti-spikes
Deciding stability and mortality of piecewise affine dynamical systems
The stability of saturated linear dynamical systems is undecidable
Transiently chaotic neural networks with piecewise linear output functions
Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review
The case for hypercomputation
How much can analog and hybrid systems be proved (super-)Turing
Simple Recurrent Networks Learn Context-Free and Context-Sensitive Languages by Counting
The functions of finite support: a canonical learning problem
Expressive power of first-order recurrent neural networks determined by their attractor dynamics
Synthesizing context-free grammars from recurrent neural networks
Continuous-Time Symmetric Hopfield Nets Are Computationally Universal
Stack-like and queue-like dynamics in recurrent neural networks
Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons
Vapnik-Chervonenkis dimension of recurrent neural networks
Average-Case Completeness in Tag Systems
ON COMPUTATION OVER CHAOS USING NEURAL NETWORKS: APPLICATION TO BLIND SEARCH AND RANDOM NUMBER GENERATION
The computational limits to the cognitive power of the neuroidal tabula rasa
Universal Neural Field Computation
Stochastic analog networks and computational complexity
Inverse problems in dynamic cognitive modeling
Analog computation with dynamical systems
From Hopfield nets to recursive networks to graph machines: numerical machine learning for structured data
Hamilton's rule, the evolution of behavior rules and the wizardry of control theory
A characterization of polynomial time computable functions from the integers to the reals using discrete ordinary differential equations
A Nonautonomous Equation Discovery Method for Time Signal Classification
A Survey on Analog Models of Computation