Combinatorial methods in density estimation (Q5926856)

scientific article; zbMATH DE number 1573292

    Statements

    Combinatorial methods in density estimation (English)
    8 March 2001
    This carefully written monograph focuses on nonparametric estimation of a density from i.i.d. data, with goodness of fit measured in the \(L^1\)-norm. An estimator studied throughout the text is the one which, from a given class \(\{ f_{\theta}\}\) of candidates, picks the one minimizing the sup-distance between the empirical measure \(\mu_n(A)\) and \(\int_A f_{\theta}\); here \(A\) runs through all sets of the type \(\{ f_{\theta} > f_{\theta'}\}\). It can then be shown that the \(L^1\)-distance between the resulting estimator \(f_n\) and the true but unknown density \(f\) is bounded from above by, among other things, the empirical discrepancy over this class of sets. At this stage of the argument combinatorial tools are needed, which explains the title of the book. Most of the text is concerned with upper bounds for the expectation of \(\int |f_n-f|\). Distributional convergence, which is perhaps more important in practical applications, is not dealt with, and other approaches to density estimation are mentioned only briefly, if at all. Similarly, the exercises and the list of references reflect the authors' personal interest in their approach.

    Overall the book has 17 chapters. After a brief introduction to the subject, Chapter 2 presents a concise treatment of (exponential) concentration inequalities, which Chapter 3 applies to maximal deviations of empirical measures. Chapter 4 provides some by now classical tools from combinatorial Vapnik-Chervonenkis theory, useful for bounding empirical discrepancies. Chapter 5 collects results on the total variation distance between two measures. Chapters 6, 7 and 8 present the general idea behind the estimators, namely choosing \(f_n\) from a class of competitors so that a certain empirical distance is minimal. The subsequent chapters discuss applications to kernel, general additive and wavelet estimators, among others. Finally, Chapter 15 provides some minimax theory. As already mentioned, in each case the focus is on the expected \(L^1\)-distance between \(f_n\) and \(f\).

    The book is recommended to those who want an overview of the state of the art of this approach. The general style of the text is theoretical; unfortunately, no data examples or simulation results are discussed to demonstrate the strengths of the methodology.
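    To make the selection rule described above concrete: for the minimum-distance estimate over a class \(\{ f_{\theta}\}\) with Yatracos sets \(\mathcal{A} = \bigl\{ \{ f_{\theta} > f_{\theta'}\} \bigr\}\), the bound the review alludes to is commonly quoted in the form
    \[
    \int |f_n - f| \;\le\; 3\,\inf_{\theta} \int |f_{\theta} - f| \;+\; 4\,\sup_{A \in \mathcal{A}} \bigl|\mu_n(A) - \mu(A)\bigr|,
    \]
    where \(\mu\) is the common distribution of the data; the constants 3 and 4 are those usually cited for this estimate and are not quoted from the review itself. The following Python sketch implements the selection rule for a finite list of candidate densities; the function names are illustrative assumptions, and a Riemann sum on a fine grid stands in for exact integration. It is an assumption-laden illustration, not code from the book.

```python
import numpy as np

def minimum_distance_estimate(candidates, data, grid):
    """Pick, from a finite list of candidate densities, the one whose
    measure is closest to the empirical measure in sup-distance over
    the Yatracos sets A = {f_i > f_j}, as described in the review.

    candidates : list of vectorised densities on the real line
    data       : 1-d array of i.i.d. observations
    grid       : fine, equally spaced grid for numerical integration
    """
    dx = grid[1] - grid[0]
    on_grid = [f(grid) for f in candidates]   # candidate values on the grid
    on_data = [f(data) for f in candidates]   # candidate values at the data

    best_idx, best_score = 0, np.inf
    for t in range(len(candidates)):
        score = 0.0
        for i in range(len(candidates)):
            for j in range(len(candidates)):
                if i == j:
                    continue
                # Empirical measure of the Yatracos set A = {f_i > f_j}.
                mu_n = np.mean(on_data[i] > on_data[j])
                # Riemann approximation of the integral of f_t over A.
                mask = on_grid[i] > on_grid[j]
                integral = np.sum(on_grid[t][mask]) * dx
                score = max(score, abs(mu_n - integral))
        if score < best_score:
            best_idx, best_score = t, score
    return best_idx, best_score

# Hypothetical usage: choose a Gaussian scale from three candidates.
rng = np.random.default_rng(0)
data = rng.normal(size=500)
grid = np.linspace(-8.0, 8.0, 4001)
sigmas = [0.5, 1.0, 2.0]
candidates = [
    (lambda x, s=s: np.exp(-x ** 2 / (2 * s ** 2)) / (s * np.sqrt(2 * np.pi)))
    for s in sigmas
]
idx, score = minimum_distance_estimate(candidates, data, grid)
print(f"selected sigma = {sigmas[idx]}, empirical sup-distance = {score:.3f}")
```

    The cubic loop over candidate pairs reflects the definition of the Yatracos class directly and is adequate for the small finite classes this sketch assumes; the book's theory, of course, covers far richer classes than a finite list.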
    L1-distance
    minimum distance estimation
    empirical discrepancy
    finite sample bounds