Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression
From MaRDI portal
Publication: 6345409
arXiv: 2007.09753
MaRDI QID: Q6345409
FDO: Q6345409
Authors: Behzad Azmi, Dante Kalise, Karl Kunisch
Publication date: 19 July 2020
Abstract: A sparse regression approach for the computation of high-dimensional optimal feedback laws arising in deterministic nonlinear control is proposed. The approach exploits the control-theoretical link between the Hamilton-Jacobi-Bellman PDE characterizing the value function of the optimal control problem, and first-order optimality conditions via Pontryagin's Maximum Principle. The latter is used as a representation formula to recover the value function and its gradient at arbitrary points in the space-time domain through the solution of a two-point boundary value problem. After generating a dataset consisting of different state-value pairs, a hyperbolic cross polynomial model for the value function is fitted using LASSO regression. An extended set of low- and high-dimensional numerical tests in nonlinear optimal control reveals that enriching the dataset with gradient information reduces the number of training samples, and that the sparse polynomial regression consistently yields a feedback law of lower complexity.
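The gradient-augmented regression step described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the "data" comes from a known quadratic value function (standing in for samples produced by a Pontryagin two-point BVP solver), the monomial dictionary is a simple stand-in for a hyperbolic cross basis, and the LASSO problem is solved with plain ISTA (proximal gradient) iterations. The key idea shown is how gradient samples enter as extra rows of the same linear system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: a known quadratic "value function"
#   V(x) = x1^2 + x1*x2 + 2*x2^2
# and its gradient, standing in for samples a PMP-based TPBVP solver
# would produce (this is NOT the paper's benchmark problem).
def V(x):
    return x[:, 0]**2 + x[:, 0] * x[:, 1] + 2.0 * x[:, 1]**2

def gradV(x):
    return np.stack([2.0 * x[:, 0] + x[:, 1],
                     x[:, 0] + 4.0 * x[:, 1]], axis=1)

# Monomial dictionary up to total degree 2 (a simple surrogate for a
# hyperbolic cross polynomial basis).
exponents = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def features(x):
    return np.column_stack([x[:, 0]**a * x[:, 1]**b for a, b in exponents])

def feature_grads(x):
    # Partial derivatives of each monomial w.r.t. x1 and x2.
    d1 = np.column_stack([
        a * x[:, 0]**max(a - 1, 0) * x[:, 1]**b if a > 0 else np.zeros(len(x))
        for a, b in exponents])
    d2 = np.column_stack([
        b * x[:, 0]**a * x[:, 1]**max(b - 1, 0) if b > 0 else np.zeros(len(x))
        for a, b in exponents])
    return d1, d2

n = 40
X = rng.uniform(-1.0, 1.0, size=(n, 2))
Phi = features(X)
D1, D2 = feature_grads(X)
G = gradV(X)

# Gradient augmentation: stack value rows and gradient rows into one
# linear system A c = y for the coefficient vector c.
A = np.vstack([Phi, D1, D2])
y = np.concatenate([V(X), G[:, 0], G[:, 1]])

# LASSO via ISTA; lam kept small so the fit stays accurate while the
# soft-thresholding step drives irrelevant coefficients toward zero.
lam = 1e-3
L = np.linalg.norm(A, 2)**2          # Lipschitz constant of the smooth part
c = np.zeros(len(exponents))
for _ in range(5000):
    grad = A.T @ (A @ c - y)
    z = c - grad / L
    c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

print(np.round(c, 3))  # coefficients ordered as in `exponents`
```

With this setup the recovered coefficient vector should be close to (0, 0, 0, 1, 1, 2), i.e. the regression identifies the three active monomials of V and suppresses the rest; the same system without the gradient rows would need more value samples to pin down the same coefficients.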
MSC classifications:
Numerical optimization and variational techniques (65K10)
Fuzziness, and linear inference and regression (62J86)
Feedback control (93B52)
Control/observation systems governed by ordinary differential equations (93C15)
Control/observation systems governed by partial differential equations (93C20)
This page was built for publication: Optimal Feedback Law Recovery by Gradient-Augmented Sparse Polynomial Regression