A generalization of the Eckart-Young-Mirsky matrix approximation theorem (Q1091453)

From MaRDI portal
Author: Alan J. Hoffman
Full work available at URL: https://doi.org/10.1016/0024-3795(87)90114-5
OpenAlex ID: W2058512713

Cites work:
- An Analysis of the Total Least Squares Problem
- Q5185900
- Symmetric gauge functions and unitarily invariant norms
- Schur complements and statistics
- On the Asymptotic Behavior of Scaled Singular Value and QR Decompositions
- Latent Root Regression Analysis

Latest revision as of 09:42, 18 June 2024

scientific article

    A generalization of the Eckart-Young-Mirsky matrix approximation theorem (English)
    1987
    Let \(X\) be an \(n\times p\) matrix with \(n\geq p\) and let \(\|\cdot\|\) be a unitarily invariant matrix norm. Let \(X=(X_1,X_2)\), where \(X_1\) has \(k\) columns. The problem considered in this paper is: find a matrix \(\hat X_2\) such that \(\operatorname{rank}(X_1,\hat X_2)\leq r\) and \[ \|(X_1,\hat X_2)-(X_1,X_2)\| = \inf_{\operatorname{rank}(X_1,\bar X_2)\leq r}\|(X_1,\bar X_2)-(X_1,X_2)\|. \] This problem was solved by \textit{C. Eckart} and \textit{G. Young} [The approximation of one matrix by another of lower rank, Psychometrika 1, 211-218 (1936)] in the case \(k=0\) for the Frobenius norm. Let \(H_r(X)\) denote the Eckart-Young solution (with \(H_r(X)=X\) if \(r>p\)). The authors prove the following: Theorem. Let \(X=(X_1,X_2)\), where \(X_1\) has \(k\) columns, and let \(\ell=\operatorname{rank} X_1\). Let \(P\) denote the orthogonal projection onto the column space of \(X_1\) and \(P^{\perp}\) the orthogonal projection onto its orthogonal complement. If \(\ell\leq r\), then the matrix \(\hat X_2 = PX_2 + H_{r-\ell}(P^{\perp}X_2)\) is a solution of the problem above. A number of consequences of this theorem are considered; in particular, applications to multiple correlations, variance inflation factors and total least squares are given.
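    The theorem is constructive: project \(X_2\) onto the column space of \(X_1\), apply the Eckart-Young truncation to the residual, and add the two parts. A minimal NumPy sketch of this recipe follows (function names and the rank tolerance are illustrative, not from the paper; the truncated SVD realizes \(H_r\) for the Frobenius norm):

```python
import numpy as np

def H(A, r):
    # Eckart-Young: best rank-r approximation of A via a truncated SVD;
    # returns A unchanged when r >= min(A.shape), matching H_r(A) = A.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    if r >= len(s):
        return A.copy()
    return (U[:, :r] * s[:r]) @ Vt[:r]

def constrained_low_rank(X1, X2, r, tol=1e-12):
    # Minimize ||(X1, X2hat) - (X1, X2)|| subject to rank(X1, X2hat) <= r,
    # keeping X1 fixed; requires ell = rank(X1) <= r.
    U1, s1, _ = np.linalg.svd(X1, full_matrices=False)
    ell = int(np.sum(s1 > tol * s1[0]))   # numerical rank of X1
    U1 = U1[:, :ell]                      # orthonormal basis of col(X1)
    PX2 = U1 @ (U1.T @ X2)                # P X2: projection onto col(X1)
    return PX2 + H(X2 - PX2, r - ell)     # P X2 + H_{r-ell}(P_perp X2)
```

    The SVD of \(X_1\) (rather than a QR factorization) is used so that the column space, and hence \(P\), is computed correctly even when \(X_1\) is rank deficient.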
    Keywords: matrix approximation; unitarily invariant matrix norm; rank; singular values; Frobenius norm; multiple correlations; variance inflation factors; total least squares
