Two numerical methods for optimizing matrix stability (Q1611899): Difference between revisions
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Two numerical methods for optimizing matrix stability | scientific article | |
Statements
Two numerical methods for optimizing matrix stability (English)
28 August 2002
The authors consider the affine matrix family \(A(x)=A_{0}+\sum_{k=1}^{m}x_{k}A_{k}\), where \(x\in\mathbb{R}^{m}\) and the \(A_{k}\) are \(n\times n\) matrices, and seek to minimize its spectral abscissa, i.e. the largest real part of its eigenvalues. This is a common problem in control theory, arising typically in the stabilization of dynamical systems by output feedback. Two methods and their associated algorithms are proposed. The first finds a local minimizer by a direction search and is based on the random gradient bundle method. The second, based on the robust spectral abscissa, starts from a bilinear matrix inequality and treats the largest eigenvalue as a functional over the set of positive definite matrices. The algorithms associated with both methods, each computing a local minimizer, are described: the random gradient bundle method with a line search, and a Newton barrier method. Both algorithms are implemented and compared in the section on numerical examples.
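To make the setting concrete, the following is a minimal sketch (in NumPy, not the authors' implementation) of minimizing the spectral abscissa of an affine family \(A(x)\). The sampled finite-difference gradients and the simple backtracking line search below are crude stand-ins for the paper's random gradient bundle method; the matrices `A0`, `A1`, `A2` form an assumed toy example of output feedback for a double integrator.

```python
import numpy as np

def spectral_abscissa(A):
    """Largest real part of the eigenvalues of A."""
    return np.max(np.linalg.eigvals(A).real)

def alpha(x, A0, Ak):
    """Spectral abscissa of the affine family A(x) = A0 + sum_k x_k A_k."""
    A = A0 + sum(xk * M for xk, M in zip(x, Ak))
    return spectral_abscissa(A)

def minimize_alpha(A0, Ak, x0, iters=200, samples=20, radius=1e-3, step=0.1, seed=0):
    """Crude descent on the (nonsmooth) spectral abscissa: average
    finite-difference gradients sampled near x, then backtrack along
    the negative of that average until alpha decreases."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    m = len(Ak)
    for _ in range(iters):
        grads = []
        for _ in range(samples):
            y = x + radius * rng.standard_normal(m)   # sample near x
            f0 = alpha(y, A0, Ak)
            g = np.empty(m)
            for k in range(m):                        # forward differences
                e = np.zeros(m); e[k] = 1e-6
                g[k] = (alpha(y + e, A0, Ak) - f0) / 1e-6
            grads.append(g)
        d = -np.mean(grads, axis=0)                   # descent direction
        t, f = step, alpha(x, A0, Ak)
        while t > 1e-10 and alpha(x + t * d, A0, Ak) >= f:
            t *= 0.5                                  # backtracking line search
        if t <= 1e-10:
            break                                     # no descent found
        x = x + t * d
    return x, alpha(x, A0, Ak)

# assumed toy example: double integrator with state-feedback gains x1, x2
A0 = np.array([[0.0, 1.0], [0.0, 0.0]])
A1 = np.array([[0.0, 0.0], [-1.0, 0.0]])  # x1: position gain
A2 = np.array([[0.0, 0.0], [0.0, -1.0]])  # x2: velocity gain
x_opt, a_opt = minimize_alpha(A0, [A1, A2], [1.0, 0.0])
print(x_opt, a_opt)
```

Starting from gains \((1,0)\), the closed-loop eigenvalues are \(\pm i\) (spectral abscissa 0), and the descent pushes the abscissa strictly negative, i.e. it stabilizes the system.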
Keywords: spectral abscissa; robust spectral abscissa; optimal stability; eigenvalues; stabilization; dynamical systems; output feedback; algorithms; random gradient bundle method; matrix inequality; Newton barrier method; numerical examples