On the computational power of neural nets

From MaRDI portal

DOI: 10.1006/jcss.1995.1013
zbMath: 0826.68104
OpenAlex: W2067619114
MaRDI QID: Q1892212

Hava T. Siegelmann, Eduardo D. Sontag

Publication date: 5 July 1995

Published in: Journal of Computer and System Sciences

Full work available at URL: https://doi.org/10.1006/jcss.1995.1013




Related Items

Spiking Neural P Systems with Astrocytes
Computability with low-dimensional dynamical systems
A statistical model of neural network learning via the Cramer-Rao lower bound
Analog quantum computing (AQC) and the need for time-symmetric physics
Metric entropy limits on recurrent neural network learning of linear dynamical systems
On The Complexity of Bounded Time Reachability for Piecewise Affine Systems
Diffusive Influence Systems
A family of universal recurrent networks
On the complexity of bounded time and precision reachability for piecewise affine systems
Recurrent Neural Networks with Small Weights Implement Definite Memory Machines
Spiking neural P systems with a flat maximally parallel use of rules
Complexity of reachability problems for finite discrete dynamical systems
The Computational Power of Interactive Recurrent Neural Networks
General-Purpose Computation with Neural Networks: A Survey of Complexity Theoretic Results
Computation with perturbed dynamical systems
AN ANALOGUE-DIGITAL CHURCH-TURING THESIS
On probabilistic analog automata
Simulation of Turing machine with uEAC-computable functions
Expressive Power of Non-deterministic Evolving Recurrent Neural Networks in Terms of Their Attractor Dynamics
The ARNN model relativises \(\mathrm{P}=\mathrm{NP}\) and \(\mathrm{P}\neq \mathrm{NP}\)
Subrecursive neural networks
Complete controllability of continuous-time recurrent neural networks
Three analog neurons are Turing universal
Design of continuous-time recurrent neural networks with piecewise-linear activation function for generation of prescribed sequences of bipolar vectors
A brief review of neural networks based learning and control and their applications for robots
Formal languages and the NLP black box
Universality of gradient descent neural network training
The Power of Machines That Control Experiments
Noise-robust realization of Turing-complete cellular automata by using neural networks with pattern representation
Asynchronous spiking neural P systems with local synchronization of rules
Nonlinear spiking neural P systems with multiple channels
A provably stable neural network Turing machine with finite precision and time
Designing universal causal deep learning models: The geometric (Hyper)transformer
On the decidability of reachability in continuous time linear time-invariant systems
The expressive power of analog recurrent neural networks on infinite input streams
Quasi-periodic \(\beta\)-expansions and cut languages
Computational capabilities of analog and evolving neural networks over infinite input streams
Energy Complexity of Recurrent Neural Networks
Positive Neural Networks in Discrete Time Implement Monotone-Regular Behaviors
Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing
Artificial neural networks trained through deep reinforcement learning discover control strategies for active flow control
Analog computation through high-dimensional physical chaotic neuro-dynamics
The nature of the extended analog computer
Where do Bayesian priors come from?
Automata complete computation with Hodgkin-Huxley neural networks composed of synfire rings
Analog neuron hierarchy
On the computation of Boolean functions by analog circuits of bounded fan-in
Pictorial reasoning with cell assemblies
A review on deep reinforcement learning for fluid mechanics
A survey of computational complexity results in systems and control
Spiking neural P systems with target indications
Boundedness of the Domain of Definition is Undecidable for Polynomial ODEs
Computability with polynomial differential equations
Cut Languages in Rational Bases
Spiking neural P systems with structural plasticity and anti-spikes
Deciding stability and mortality of piecewise affine dynamical systems
The stability of saturated linear dynamical systems is undecidable
Transiently chaotic neural networks with piecewise linear output functions
Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review
The case for hypercomputation
How much can analog and hybrid systems be proved (super-)Turing
Simple Recurrent Networks Learn Context-Free and Context-Sensitive Languages by Counting
The functions of finite support: a canonical learning problem
Expressive power of first-order recurrent neural networks determined by their attractor dynamics
Synthesizing context-free grammars from recurrent neural networks
Continuous-Time Symmetric Hopfield Nets Are Computationally Universal
Stack-like and queue-like dynamics in recurrent neural networks
Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons
Vapnik-Chervonenkis dimension of recurrent neural networks
Average-Case Completeness in Tag Systems
ON COMPUTATION OVER CHAOS USING NEURAL NETWORKS: APPLICATION TO BLIND SEARCH AND RANDOM NUMBER GENERATION
The computational limits to the cognitive power of the neuroidal tabula rasa
Universal Neural Field Computation
Stochastic analog networks and computational complexity
Inverse problems in dynamic cognitive modeling
Analog computation with dynamical systems
From Hopfield nets to recursive networks to graph machines: numerical machine learning for structured data
Hamilton's rule, the evolution of behavior rules and the wizardry of control theory
A characterization of polynomial time computable functions from the integers to the reals using discrete ordinary differential equations
A Nonautonomous Equation Discovery Method for Time Signal Classification
A Survey on Analog Models of Computation