FFT-based exponentially weighted recursive least squares computations (Q1368766)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | FFT-based exponentially weighted recursive least squares computations | scientific article | |
Statements
FFT-based exponentially weighted recursive least squares computations (English)
0 references
6 May 1998
0 references
The underlying problem of this paper is recursive least squares (RLS) approximation, posed as follows: given a real \(p\)-by-\(n\) Toeplitz data matrix \(X(t)\) with full column rank and a \(p\)-vector \(d(t)\) at time step \(t\), find an \(n\)-vector \(w(t)\) that minimizes \(\|d(t)- X(t)w(t)\|^2_2\). This problem is solved recursively by solving the normal equations \[ X(t)^T X(t)w(t)= X(t)^T d(t) \] with the preconditioned conjugate gradient (PCG) method and an FFT-based preconditioner. The same approach is employed in the case where the data matrix \(X(t)\) and the data vector \(d(t)\) are defined recursively by \[ X(t)= \begin{pmatrix} \sqrt{\gamma}\, X(t-1)\\ x^T(t)\end{pmatrix},\qquad d(t)= \begin{pmatrix} \sqrt{\gamma}\, d(t-1)\\ d_t\end{pmatrix}, \] where \(\gamma\) \((0<\gamma\leq 1)\) is the forgetting factor, \[ x^T(t)= (x_t, x_{t-1},\dots, x_{t-n+1}), \] and the recursion starts from \(X(1)= x^T(1)\) and \(d(1)= d_1\). A theoretical analysis shows that this method is an efficient algorithm for exponentially weighted RLS computations. The paper closes with numerical examples.
0 references
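To make the approach described in the review concrete, the following is a minimal Python sketch (NumPy/SciPy): it forms the exponentially weighted normal equations from a scalar data stream and solves them with conjugate gradients preconditioned by a circulant approximation applied via FFTs. The function names, the Strang-type circulant construction, and all parameter values are illustrative assumptions of this sketch, not the paper's actual preconditioner or recursive update.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.sparse.linalg import LinearOperator, cg

def ewrls_normal_equations(x, d, n, gamma):
    """Form the exponentially weighted normal equations A w = b, where
    A = X(t)^T X(t), b = X(t)^T d(t), the rows of X(t) are the regressors
    x(k)^T = (x_k, x_{k-1}, ..., x_{k-n+1}), and older rows are scaled by
    powers of sqrt(gamma) as in the recursion quoted in the review."""
    x = np.asarray(x, dtype=float)
    d = np.asarray(d, dtype=float)
    t = len(x)
    rows = [x[k - n + 1:k + 1][::-1] for k in range(n - 1, t)]
    X = np.vstack(rows)                                      # (t-n+1)-by-n, Toeplitz
    w = np.sqrt(gamma) ** np.arange(X.shape[0] - 1, -1, -1)  # newest row has weight 1
    Xw, dw = X * w[:, None], d[n - 1:] * w
    # For brevity A is formed explicitly; in an FFT-based method the products
    # with the (near-Toeplitz) normal-equations matrix would themselves use FFTs.
    return Xw.T @ Xw, Xw.T @ dw

def circulant_preconditioner(A):
    """Hypothetical FFT-based preconditioner: a Strang-type circulant built from
    the first column of A (treated as approximately Toeplitz), inverted via FFT."""
    n = A.shape[0]
    col = A[:, 0]
    c = np.empty(n)
    for k in range(n):
        c[k] = col[k] if k <= n // 2 else col[n - k]      # copy central diagonals
    eig = np.fft.fft(c).real                              # circulant eigenvalues
    eig = np.maximum(eig, 1e-8 * np.max(np.abs(eig)))     # keep it positive definite for CG
    matvec = lambda v: np.real(np.fft.ifft(np.fft.fft(v) / eig))
    return LinearOperator((n, n), matvec=matvec)

# Usage sketch on synthetic data (all values are illustrative).
rng = np.random.default_rng(0)
n, t, gamma = 8, 200, 0.99
w_true = rng.standard_normal(n)
x = rng.standard_normal(t)
X_full = toeplitz(x[n - 1:], x[n - 1::-1])                # regressor matrix at time t
d = np.concatenate([np.zeros(n - 1),
                    X_full @ w_true + 0.01 * rng.standard_normal(t - n + 1)])

A, b = ewrls_normal_equations(x, d, n, gamma)
M = circulant_preconditioner(A)
w_hat, info = cg(A, b, M=M)
print("CG converged:", info == 0,
      "| max coefficient error:", float(np.max(np.abs(w_hat - w_true))))
```

The circulant preconditioner is diagonalized by the FFT, so each preconditioning step costs \(O(n\log n)\); this is the structural reason an FFT-based PCG iteration can be attractive for Toeplitz-structured normal equations.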
recursive least squares approximation
0 references
preconditioned conjugate gradient method
0 references
FFT-based preconditioner
0 references
algorithm
0 references
exponentially weighted RLS computations
0 references
numerical examples
0 references