Stability of discrete-time linear systems with Markovian jumping parameters (Q1924413)
Language | Label | Description | Also known as
---|---|---|---
English | Stability of discrete-time linear systems with Markovian jumping parameters | scientific article |
Statements
Stability of discrete-time linear systems with Markovian jumping parameters (English)
12 March 1997
The authors consider linear discrete-time control systems in which the system and control matrices depend on an underlying Markov chain \(\xi(t)\) with finitely many states. Such a system is called `stochastically stabilizable' if there exist a linear feedback law \(u(t)=-K(\xi(t))x(t)\) and a positive definite matrix \(\widetilde{P}\) with \[ \mathbb{E}_u\Biggl( \sum^\infty_{t=0} x'(t,x_0,\xi_0,u)\, x(t,x_0,\xi_0,u)\Biggr)\leq x'_0\widetilde{P}x_0, \] where \(x(t,x_0,\xi_0,u)\) denotes the solution with initial condition \((x_0,\xi_0)\). This feedback law requires exact knowledge of the pair \((\xi(t),x(t))\) for all \(t\in\mathbb{N}\). It is proved that stochastic stabilizability is equivalent to the solvability of an appropriate Riccati-type equation for each Markov state \(\xi\). The solutions \(P(\xi)\) then give rise to a Lyapunov function \(x'P(\xi)x\) for the closed-loop system. Two uncertainty models are presented: linear perturbations satisfying matching conditions, and nonlinear bounded perturbations. The authors formulate conditions on the uncertainties under which \(x'P(\xi)x\) remains a (common) Lyapunov function, thus ensuring robustness of stability. A one-dimensional example with a two-state Markov chain illustrates the results.
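The link between stochastic stabilizability and the coupled Riccati-type equations can be explored numerically. The following Python sketch iterates the standard coupled Riccati recursion for jump linear quadratic control on a hypothetical scalar two-mode example in the spirit of the paper's illustration; the matrices \(A(\xi)\), \(B(\xi)\), the transition probabilities, and the weights \(Q\), \(R\) are illustrative assumptions, not data from the article.

```python
# Minimal numerical sketch (illustrative, not from the paper): fixed-point
# iteration on coupled Riccati-type equations for a scalar system whose
# dynamics switch according to a two-state Markov chain.
import numpy as np

# Mode-dependent scalar dynamics x(t+1) = A[i]*x(t) + B[i]*u(t), i = xi(t).
A = np.array([1.2, 0.5])            # assumed: mode 0 unstable, mode 1 stable
B = np.array([1.0, 1.0])            # assumed control coefficients
Ptr = np.array([[0.7, 0.3],         # assumed transition probabilities p_ij
                [0.4, 0.6]])
Q, R = 1.0, 1.0                     # assumed state and control weights

P = np.ones(2)                      # one value P(xi) per Markov state
for _ in range(1000):
    EP = Ptr @ P                    # E_i[P] = sum_j p_ij P(j)
    # Coupled Riccati-type recursion (scalar LQ form for jump systems)
    P_new = Q + A**2 * EP - (A * EP * B)**2 / (R + B**2 * EP)
    if np.max(np.abs(P_new - P)) < 1e-12:
        P = P_new
        break
    P = P_new

EP = Ptr @ P
K = B * EP * A / (R + B**2 * EP)    # mode-dependent feedback gains K(xi)
print("P(xi) =", P)                 # finite positive limits suggest stabilizability
print("K(xi) =", K)
print("closed-loop A - B*K per mode =", A - B * K)
```

If the iteration converges to finite positive values \(P(\xi)\), the quadratic form \(x'P(\xi)x\) plays the role of the stochastic Lyapunov function described in the review.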
discrete-time
Markov chain
stochastic stabilization
Lyapunov function
robustness