Organization of the state space of a simple recurrent network before and after training on recursive linguistic structures
Publication: 872410
DOI: 10.1016/j.neunet.2006.01.020
zbMath: 1111.68097
OpenAlex: W2109101264
Wikidata: Q51941965
Scholia: Q51941965
MaRDI QID: Q872410
Michal Čerňanský, Ľubica Beňušková, Matej Makula
Publication date: 27 March 2007
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2006.01.020
Keywords: recurrent neural networks; language processing; variable length Markov model; linguistic structures; Markovian architectural bias; neural prediction machines; next-symbol prediction; state space analysis
MSC classification: Learning and adaptive systems in artificial intelligence (68T05); Natural language processing (68T50)
Cited works:
- The power of amnesia: Learning probabilistic automata with variable memory length
- On the Emergence of Rules in Neural Networks
- Recurrent Neural Networks with Small Weights Implement Definite Memory Machines
- Architectural Bias in Recurrent Neural Networks: Fractal Analysis
- Predicting the future of discrete sequences from fractal representations of the past