Introduction to optimal estimation (Q1964258)

scientific article

    Statements

    Introduction to optimal estimation (English)
    6 February 2000
    This book grew out of a series of lecture notes into an introductory but comprehensive treatment of Kalman and Wiener filtering, together with the consideration of least-squares estimation, maximum likelihood estimation, and maximum a posteriori estimation based on discrete-time measurements. Emphasis is also placed on the relation of these different methods to a systematic development of optimal estimation. The background needed for reading the book is stated as being `a standard course on probability and random variables and one or more courses on signals and systems including a development of the state space theory of linear systems'. Summaries of other material needed are provided by the authors.

    There are unfortunately a fair number of typographical errors (even in the Series Editors' Foreword!) that detract from the appeal of the book. These would, however, probably be correctable by the reader without much difficulty.

    The mathematical statistician who reads this book will be jolted by a number of things that presumably would not bother the expert in filtering theory [cf. \textit{A. H. Jazwinski}, Stochastic processes and filtering theory, Academic Press (1970; Zbl 0203.50101)]. Let me mention a few of these here:

    1. the defining, on page 28, of a probability space as \(S\) rather than as a triple \((S, {\mathcal A}, P)\);

    2. the specifying of two random variables as independent if their joint p.d.f. factorizes into the product of the marginals, without noting that this is required to hold for all values of the variables;

    3. the defining, on page 71, of \(\widehat s(n)\) as an unbiased estimate (of \(s(n)\)?) when \(E[s(n)-\widehat s(n)]=0\). And on this point, it is wrong to write, in (3.6), \(\lim_{n\to \infty}E[\widehat s(n)]= E[s(n)]\) (the standard forms of the definitions invoked in points 2 and 3 are recalled in the note following this review).

    The Bayesian statistician will also be surprised to find maximum a posteriori estimation carried out in \S 3.2 with no parameters, though this might be seen as sanctioned by work like \textit{S. Geisser}, ``The future of statistics in retrospect'' [Bayesian Statistics 3, 147-158 (1988; Zbl 0706.62007)], where a `Stringent Version' of Bayes's Theorem is given that is `bereft of parametric intrusions'. Perhaps more serious is the use of the Normal distribution in Example 2.3: a random variable known to take on values only in \((-1,1)\) is inappropriately described by a distribution over the whole real line.

    The development and treatment of the Wiener and Kalman filters is readily understandable. Questions of nonlinear filtering are also considered, and there is discussion of several practical applications, e.g. target tracking and system identification. All in all, the book is a useful addition to the literature on filtering theory and estimation and could, I believe, be profitably used in a post-graduate course on these topics.
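    Note: for the reader's convenience, the standard definitions underlying points 2 and 3 above may be stated explicitly (these are the conventional formulations, not quotations from the book under review):
    \[
    X \text{ and } Y \text{ are independent} \iff f_{X,Y}(x,y) = f_X(x)\,f_Y(y) \quad \text{for all } (x,y),
    \]
    \[
    \widehat s(n) \text{ is an unbiased estimate of } s(n) \iff E[\widehat s(n)] = E[s(n)] \quad \text{for every } n.
    \]
    Presumably the reviewer's objection to (3.6) is that \(n\) is bound by the limit on the left-hand side yet free on the right, and that at best such a limit statement expresses asymptotic, rather than exact, unbiasedness.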
    Wiener filtering
    least-squares estimation
    maximum likelihood estimation
    maximum a posteriori estimation
    optimal estimation
    Kalman filters
