Efficient representation and approximation of model predictive control laws via deep learning
From MaRDI portal
Publication: 6303602
DOI: 10.1109/TCYB.2020.2999556
arXiv: 1806.10644
Wikidata: Q96646943
Scholia: Q96646943
MaRDI QID: Q6303602
FDO: Q6303602
Authors: S. Lucia
Publication date: 27 June 2018
Abstract: We show that artificial neural networks with rectifier units as activation functions can exactly represent the piecewise affine function that results from the formulation of model predictive control of linear time-invariant systems. The choice of deep neural networks is particularly interesting, as they can represent exponentially many more affine regions than networks with only one hidden layer. We provide theoretical bounds on the minimum number of hidden layers and neurons per layer that a neural network should have to exactly represent a given model predictive control law. The proposed approach has strong potential as an approximation method for predictive control laws, leading to better approximation quality and significantly smaller memory requirements than previous approaches, as we illustrate via simulation examples. We also suggest different alternatives to correct or quantify the approximation error. Since the online evaluation of neural networks is extremely simple, the approximated controllers can be deployed on low-power embedded devices with small storage capacity, enabling the implementation of advanced decision-making strategies for complex cyber-physical systems with limited computing capabilities.
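The central claim of the abstract, that a ReLU network can exactly represent a piecewise affine (PWA) control law, can be illustrated with a minimal sketch. The example below is not the paper's construction or code: it hand-picks weights for a toy saturated linear feedback `u(x) = clip(-K*x, -1, 1)`, which is a PWA function of the scalar state, and encodes it exactly with two rectifier units using the identity `clip(v, -1, 1) = -1 + relu(v + 1) - relu(v - 1)`.

```python
import numpy as np

def relu(z):
    """Rectifier activation, the building block of the networks in the paper."""
    return np.maximum(z, 0.0)

def nn_control(x, K=0.5):
    """Toy ReLU 'network' that exactly represents the saturated PWA law
    u(x) = clip(-K*x, -1, 1). The gain K and the saturation bounds are
    illustrative assumptions, not values from the paper."""
    v = -K * x  # unconstrained affine feedback
    # PWA identity: clip(v, -1, 1) = -1 + relu(v + 1) - relu(v - 1)
    return -1.0 + relu(v + 1.0) - relu(v - 1.0)

# The network output coincides with the saturated law on all three regions.
for x in [-10.0, -1.0, 0.0, 2.0, 10.0]:
    assert np.isclose(nn_control(x), np.clip(-0.5 * x, -1.0, 1.0))
```

The online evaluation is just two matrix-vector products and a max, which is why such approximated controllers suit low-power embedded hardware; the paper's contribution is bounding the depth and width needed for an exact representation of a general explicit MPC law, not this scalar special case.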