Policy Synthesis for Switched Linear Systems With Markov Decision Process Switching
DOI: 10.1109/TAC.2022.3145659 | arXiv: 2001.00835 | OpenAlex: W4226054277 | MaRDI QID: Q6137536
Authors: Bo Wu, Murat Cubuktepe, Zhe Xu, Ufuk Topcu
Publication date: 4 September 2023
Published in: IEEE Transactions on Automatic Control
Abstract: We study the synthesis of mode switching protocols for a class of discrete-time switched linear systems in which the mode jumps are governed by Markov decision processes (MDPs). We call such systems MDP-JLS for brevity. Each state of the MDP corresponds to a mode in the switched system. The probabilistic state transitions in the MDP represent the mode transitions. We focus on finding a policy that selects the switching actions at each mode such that the switched system that follows these actions is guaranteed to be stable. Given a policy in the MDP, the considered MDP-JLS reduces to a Markov jump linear system (MJLS). We consider both mean-square stability and stability with probability one. For mean-square stability, we leverage existing stability conditions for MJLSs and propose efficient semidefinite programming formulations to find a stabilizing policy in the MDP. For stability with probability one, we derive new sufficient conditions and compute a stabilizing policy using linear programming. We also extend the policy synthesis results to MDP-JLS with uncertain mode transition probabilities.
Full work available at URL: https://arxiv.org/abs/2001.00835
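Since the abstract only outlines the approach, the following is a small illustrative sketch, not taken from the paper: once a policy is fixed, the MDP-JLS becomes an MJLS, and mean-square stability can be certified with the standard coupled Lyapunov linear matrix inequalities as a semidefinite feasibility problem. The sketch uses Python with cvxpy; the function name, solver choice, and toy data are assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation): under a fixed policy the
# MDP-JLS is an MJLS with mode dynamics x_{k+1} = A_i x_k and transition
# probabilities p_ij. A standard sufficient condition for mean-square
# stability is the existence of matrices X_i > 0 satisfying the coupled
# Lyapunov LMIs  A_i^T (sum_j p_ij X_j) A_i - X_i < 0  for every mode i.
import numpy as np
import cvxpy as cp

def mjls_mean_square_stable(A_modes, P, eps=1e-6):
    """Return True if the coupled Lyapunov LMIs are feasible for the MJLS
    with mode matrices A_modes and row-stochastic transition matrix P."""
    n = A_modes[0].shape[0]
    m = len(A_modes)
    X = [cp.Variable((n, n), PSD=True) for _ in range(m)]
    constraints = []
    for i in range(m):
        # Expected Lyapunov matrix after one transition, seen from mode i.
        coupled = sum(P[i, j] * X[j] for j in range(m))
        lmi = A_modes[i].T @ coupled @ A_modes[i] - X[i]
        lmi = (lmi + lmi.T) / 2  # symmetrize for the PSD constraint
        constraints += [X[i] >> eps * np.eye(n), lmi << -eps * np.eye(n)]
    problem = cp.Problem(cp.Minimize(0), constraints)
    problem.solve(solver=cp.SCS)
    return problem.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# Hypothetical two-mode example with uniform switching probabilities.
A_modes = [np.diag([0.5, 0.4]), np.diag([0.9, 0.3])]
P = np.array([[0.5, 0.5],
              [0.5, 0.5]])
print(mjls_mean_square_stable(A_modes, P))
```

Note that this only checks stability for a fixed policy; the paper's synthesis problem additionally searches over policies in the MDP (via semidefinite programming for mean-square stability and linear programming for stability with probability one).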