Optimal control problem for the Lyapunov exponents of random matrix products (Q1584024)
From MaRDI portal
scientific article
Language | Label | Description | Also known as |
---|---|---|---|
English | Optimal control problem for the Lyapunov exponents of random matrix products | scientific article |
Statements
Optimal control problem for the Lyapunov exponents of random matrix products (English)
9 May 2001
The author studies an optimal control problem in which the objective function is the essential supremum of the Lyapunov exponents of a dynamical system described by random matrix products, where the matrices depend on a controlled Markov process \((\xi_n)\) with values in a finite or countable set \(I\). The process \((\xi_n)\) has transition probabilities \(P(a)= (P_{ij}(a): i, j\in I)\) depending on a control parameter \(a\). For any admissible control \((u_t)\), the \(\mathbb{R}^d\)-valued random variables \((X_n: n= 0,1,\dots)\) are given by the difference equation \[ X_{n+1}= M(\xi_{n+1}, Y_{n+1}) X_n,\quad X_0= x\in\mathbb{R}^d,\tag{1} \] where \((Y_n)\) are i.i.d. random variables and the \(M(i,y)\) are invertible \(d\times d\) matrices. The solution of (1) associated with the control \((u_t)\) is denoted by \(X^u_n(x)\); the control affects the solutions only through the transition probabilities \(P(a)\). Several variants of the Lyapunov exponent are considered. A decision \(\pi_t\) at time \(t\) is a stochastic kernel, a sequence of decisions is called a policy, and Markov and stationary policies are defined. The main result states that if there exists a Markov policy satisfying a certain condition, then there exists a stationary policy which minimizes the Lyapunov exponent of the solutions of (1); in this case the Lyapunov spectrum of system (1) consists of a single element.
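The following sketch is not taken from the paper; it only illustrates, under invented assumptions, how the top Lyapunov exponent of a system of the form (1) can be estimated numerically for one fixed stationary policy (the policy is absorbed into a hypothetical two-state transition matrix \(P\), and the matrices \(M(i,y)\) are made up for the example).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-state modulating chain (I = {0, 1}); a fixed stationary
# policy is assumed to be absorbed into these transition probabilities.
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

def M(i, y):
    """Invented invertible 2x2 matrices M(i, y) driven by the noise y."""
    if i == 0:
        theta = 0.3 * y
        return 1.05 * np.array([[np.cos(theta), -np.sin(theta)],
                                [np.sin(theta),  np.cos(theta)]])
    return np.array([[0.9, y],
                     [0.0, 1.1]])

def top_lyapunov_exponent(n_steps=50_000):
    """Estimate lambda = lim (1/n) log ||X_n|| for system (1),
    renormalizing the state each step to avoid overflow/underflow."""
    xi = 0
    x = np.array([1.0, 0.0])
    log_growth = 0.0
    for _ in range(n_steps):
        xi = rng.choice(2, p=P[xi])    # next state of the controlled chain
        y = rng.standard_normal()      # i.i.d. driving noise Y_{n+1}
        x = M(xi, y) @ x               # X_{n+1} = M(xi_{n+1}, Y_{n+1}) X_n
        norm = np.linalg.norm(x)
        log_growth += np.log(norm)
        x /= norm
    return log_growth / n_steps

print("estimated top Lyapunov exponent:", top_lyapunov_exponent())
```

Comparing such estimates across different (invented) transition matrices gives an informal feel for how the control, acting only through \(P(a)\), changes the exponential growth rate that the paper's stationary policy is shown to minimize.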
random dynamical system
decision models
optimal problem
Lyapunov exponents
random matrix products
controlled Markov process
stationary policy