Construction of approximation spaces for reinforcement learning
zbMATH Open: 1317.68143 · MaRDI QID: Q2933876
Authors: Wendelin Böhmer, Steffen Grünewälder, Yun Shen, Marek Musial, Klaus Obermayer
Publication date: 8 December 2014
Full work available at URL: http://jmlr.csail.mit.edu/papers/v14/boehmer13a.html
Keywords: learning; least-squares policy iteration; diffusion distance; slow feature analysis; proto-value functions; visual robot navigation
MSC classification: Factor analysis and principal components; correspondence analysis (62H25) · Learning and adaptive systems in artificial intelligence (68T05)
Cited In (8)
- Sparse approximations to value functions in reinforcement learning
- Attainability of boundary points under reinforcement learning
- Unsupervised basis function adaptation for reinforcement learning
- Non-parametric policy search with limited information loss
- AI 2005: Advances in Artificial Intelligence
- Title not available
- Algorithm of stable state spaces in reinforcement learning
- Policy iteration reinforcement learning based on geodesic Gaussian basis defined on a state-action graph