Numerical solution of \(AXB=C\) for \((R,S)\)-symmetric matrices (Q2511411)
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Numerical solution of \(AXB=C\) for \((R,S)\)-symmetric matrices | scientific article | |
Statements
Numerical solution of \(AXB=C\) for \((R,S)\)-symmetric matrices (English)
0 references
5 August 2014
0 references
The paper focuses on the numerical solution of the matrix equation \[ A X B=C\tag{1} \] for \((R, S)\)-symmetric matrices \(X\) (matrices satisfying \(RXS=X\) for given nontrivial involutions \(R\) and \(S\)), proposing two iterative algorithms based on the ideas of the classical conjugate gradient method (CG) and the conjugate gradient least squares method (CGLS). The first part is an introduction to the subject. The second part presents the classical CG method and the CGLS method from the point of view of algorithms and convergence. In the third part, starting from the idea of the CG method, the authors propose an iterative algorithm to solve the matrix equation (1) with \((R, S)\)-symmetric \(X\). The data matrices \(A,\) \(B\) and \(C\) are often perturbed in various ways (observational error, model error, rounding error, etc.), and the perturbed \(A,\) \(B\) and \(C\) may no longer satisfy the solvability conditions, which makes (1) inconsistent. With this algorithm the solvability of the above equation can be determined automatically: for any (special) initial \((R, S)\)-symmetric matrix \(X_0\), the presented algorithm terminates automatically if \(A X B=C\) is inconsistent over \((R, S)\)-symmetric matrices, while if the problem is consistent a required solution is obtained within finitely many steps in the absence of roundoff errors. In the fourth part, an algorithm is presented for the inconsistent case: for any initial \((R, S)\)-symmetric matrix \(X_0,\) a least squares solution is obtained within finitely many iteration steps in the absence of roundoff errors. The authors also prove that this algorithm satisfies a minimization property, which ensures smooth convergence of the method. The fifth part analyses the solvability of the related optimal approximation problem. The sixth part presents numerical examples illustrating the efficiency of the proposed iterative algorithms; combined with these examples, the authors give a perturbation analysis of the approximation problem and show that the algorithms are numerically stable with respect to it. The main conclusions are given in the last part.
0 references
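The constrained CGLS-type iteration described in the review can be sketched as follows. This is a minimal NumPy illustration of the general idea (restricting the CGLS search directions to the subspace of \((R,S)\)-symmetric matrices by projection), not a reproduction of the authors' exact algorithm; the function names, tolerance, and stopping rule are illustrative assumptions, and \(R\), \(S\) are assumed to be symmetric involutions so that the projection is orthogonal in the Frobenius inner product.

```python
import numpy as np

def proj_rs(X, R, S):
    # Projection onto the (R,S)-symmetric matrices, i.e. those with R @ X @ S == X.
    # R and S are assumed to be symmetric involutions (R = R.T, R @ R = I; likewise S),
    # so this is an orthogonal projection in the Frobenius inner product.
    return 0.5 * (X + R @ X @ S)

def cgls_rs_symmetric(A, B, C, R, S, tol=1e-10, max_iter=1000):
    # CGLS-type iteration for min ||A X B - C||_F over (R,S)-symmetric X
    # (a sketch of the general approach, not the paper's exact algorithm).
    n, p = A.shape[1], B.shape[0]
    X = np.zeros((n, p))                        # (R,S)-symmetric initial guess
    res = C - A @ X @ B                         # residual in the data space
    G = proj_rs(A.T @ res @ B.T, R, S)          # projected gradient, stays in the subspace
    P = G.copy()                                # search direction
    gamma = np.linalg.norm(G, 'fro') ** 2
    for _ in range(max_iter):
        if np.sqrt(gamma) < tol:                # projected normal-equation residual small enough
            break
        Q = A @ P @ B
        qnorm2 = np.linalg.norm(Q, 'fro') ** 2
        if qnorm2 == 0.0:                       # degenerate direction; stop
            break
        alpha = gamma / qnorm2
        X = X + alpha * P
        res = res - alpha * Q
        G = proj_rs(A.T @ res @ B.T, R, S)
        gamma_new = np.linalg.norm(G, 'fro') ** 2
        P = G + (gamma_new / gamma) * P
        gamma = gamma_new
    return X
```

A small illustrative run with hypothetical data (diagonal \(\pm 1\) reflections as \(R\) and \(S\)); for consistent data the iteration recovers a solution of \(AXB=C\), otherwise it converges to an \((R,S)\)-symmetric least squares solution, in line with the behaviour the review describes for the two algorithms:

```python
rng = np.random.default_rng(0)
R = np.diag([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])   # involution: R @ R = I
S = np.diag([1.0, 1.0, -1.0, -1.0])              # involution: S @ S = I
A = rng.standard_normal((5, 6))
B = rng.standard_normal((4, 3))
X_true = proj_rs(rng.standard_normal((6, 4)), R, S)  # an (R,S)-symmetric "target"
C = A @ X_true @ B                                   # consistent right-hand side
X = cgls_rs_symmetric(A, B, C, R, S)
print(np.linalg.norm(A @ X @ B - C, 'fro'))          # should be close to zero
```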
inverse problems
0 references
\((R,S)\)-symmetric matrix
0 references
iterative method
0 references
structural dynamic model updating
0 references
perturbation analysis
0 references
matrix equation
0 references
algorithm
0 references
conjugate gradient method
0 references
conjugate gradient least squares method
0 references
numerical examples
0 references