Totally model-free actor-critic recurrent neural-network reinforcement learning in non-Markovian domains
From MaRDI portal
Publication:1699932
DOI: 10.1007/s10479-016-2366-2 · zbMath: 1423.68386 · OpenAlex: W2549405740 · MaRDI QID: Q1699932
Eiji Mizutani, Stuart E. Dreyfus
Publication date: 26 February 2018
Published in: Annals of Operations Research
Full work available at URL: https://doi.org/10.1007/s10479-016-2366-2
Learning and adaptive systems in artificial intelligence (68T05)
Neural networks for/in biological studies, artificial life and related topics (92B20)
Related Items (2)
Designing an efficient blood supply chain network in crisis: neural learning, optimization and case study ⋮ Totally model-free actor-critic recurrent neural-network reinforcement learning in non-Markovian domains
Cites Work
- Reinforcement learning in the brain
- The art and theory of dynamic programming
- Totally model-free actor-critic recurrent neural-network reinforcement learning in non-Markovian domains
- \({\mathcal Q}\)-learning
- Reinforcement learning of non-Markov decision processes
- Scalable and Efficient Bayes-Adaptive Reinforcement Learning Based on Monte-Carlo Tree Search
- Absolute stability of global pattern formation and parallel memory storage by competitive neural networks
- Functional Approximations and Dynamic Programming
- On Actor-Critic Algorithms
- Simulation-based optimization of Markov reward processes
- Punish/Reward: Learning with a Critic in Adaptive Threshold Systems