Efficient model-based reinforcement learning for approximate online optimal control


DOI: 10.1016/J.AUTOMATICA.2016.08.004
zbMATH Open: 1348.93167
arXiv: 1502.02609
OpenAlex: W1540245649
MaRDI QID: Q340682

Joel A. Rosenfeld, Rushikesh Kamalapurkar, Warren E. Dixon

Publication date: 14 November 2016

Published in: Automatica

Abstract: In this paper, the infinite-horizon optimal regulation problem is solved online for a deterministic control-affine nonlinear dynamical system using the state-following (StaF) kernel method to approximate the value function. Unlike traditional methods, which aim to approximate a function over a large compact set, the StaF kernel method approximates the function in a small neighborhood of a state that travels within a compact set. Simulation results demonstrate that stability and approximate optimality of the control system can be achieved with significantly fewer basis functions than global approximation methods may require.


Full work available at URL: https://arxiv.org/abs/1502.02609
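
The abstract's central idea, approximating the value function only in a small moving neighborhood of the current state, can be illustrated with a minimal sketch. The snippet below assumes Gaussian kernels whose centers sit at fixed offsets from the current state; the function names, offsets, kernel width, and zero-initialized weights are all hypothetical choices for illustration and are not taken from the paper, where the weights would be updated online by the learning law.

```python
import numpy as np

# Sketch of the state-following (StaF) idea: the value function is
# represented by a few kernels whose centers travel with the current
# state, so only a small neighborhood is ever approximated.
# Gaussian kernels and all parameters here are illustrative assumptions,
# not the paper's implementation.

def staf_centers(x, offsets):
    """Kernel centers placed at fixed offsets from the current state x."""
    return x + offsets  # centers follow the state as it moves

def staf_features(x, centers, width=1.0):
    """Gaussian kernel features evaluated at x for the moving centers."""
    d2 = np.sum((centers - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * width ** 2))

def value_estimate(x, weights, offsets, width=1.0):
    """Local value-function estimate V(x) ~ w^T sigma(x)."""
    phi = staf_features(x, staf_centers(x, offsets), width)
    return weights @ phi

# Example: a 2-D state with three centers arranged around it.
offsets = np.array([[0.3, 0.0], [-0.15, 0.26], [-0.15, -0.26]])
weights = np.zeros(3)  # would be adapted online by the learning law
x = np.array([1.0, -0.5])
print(value_estimate(x, weights, offsets))
```

Because the centers move with the state, only a handful of basis functions is ever active at once, which is the source of the efficiency gain the abstract describes relative to covering the whole compact set with a global basis.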





Cited in: 11 documents
