Bayes and maximum likelihood for L^1-Wasserstein deconvolution of Laplace mixtures
From MaRDI portal
Publication: 1663617
DOI: 10.1007/s10260-017-0400-4
zbMATH Open: 1396.62076
arXiv: 1708.05445
OpenAlex: W2747769079
MaRDI QID: Q1663617
FDO: Q1663617
Author: C. Scricciolo
Publication date: 21 August 2018
Published in: Statistical Methods and Applications
Abstract: We consider the problem of recovering a distribution function on the real line from observations additively contaminated with errors following the standard Laplace distribution. Assuming that the latent distribution is completely unknown leads to a nonparametric deconvolution problem. We begin by studying the rates of convergence relative to the \(L^1\)-norm and the Hellinger metric for the direct problem of estimating the sampling density, which is a mixture of Laplace densities with a possibly unbounded set of locations: the rate of convergence for the Bayes' density estimator corresponding to a Dirichlet process prior over the space of all mixing distributions on the real line matches, up to a logarithmic factor, the rate for the maximum likelihood estimator. Then, appealing to an inversion inequality translating the \(L^1\)-norm and the Hellinger distance between general kernel mixtures, with a kernel density having polynomially decaying Fourier transform, into any \(L^p\)-Wasserstein distance, \(p \geq 1\), between the corresponding mixing distributions, provided their Laplace transforms are finite in some neighborhood of zero, we derive the rates of convergence in the \(L^1\)-Wasserstein metric for the Bayes' and maximum likelihood estimators of the mixing distribution. Merging in the \(L^1\)-Wasserstein distance between Bayes and maximum likelihood follows as a by-product, along with an assessment of the stochastic order of the discrepancy between the two estimation procedures.
Full work available at URL: https://arxiv.org/abs/1708.05445
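The sampling model in the abstract can be simulated in a few lines: draws from an unknown mixing distribution are contaminated with standard Laplace noise, and the quality of an estimate of the mixing distribution is measured in the \(L^1\)-Wasserstein metric. The sketch below (not code from the paper; the two-point mixing distribution and the naive "estimator" are hypothetical choices for illustration) uses SciPy's one-dimensional Wasserstein distance between empirical samples.

```python
# Illustrative sketch of the Laplace deconvolution setting
# (hypothetical example, not the paper's estimators).
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
n = 5000

# Latent draws from an unknown mixing distribution G0,
# here a two-point mixture chosen purely for illustration.
locations = rng.choice([-1.0, 2.0], size=n, p=[0.3, 0.7])

# Observations are additively contaminated with standard Laplace
# errors, so their density is a mixture of Laplace densities.
observations = locations + rng.laplace(loc=0.0, scale=1.0, size=n)

# Naive plug-in "estimate" of the mixing distribution: the empirical
# distribution of the noisy observations themselves. It ignores the
# noise entirely; a genuine deconvolution estimator (Bayes with a
# Dirichlet process prior, or the MLE, as in the paper) would do better.
d = wasserstein_distance(observations, locations)
print(f"L1-Wasserstein distance, naive estimate vs. truth: {d:.3f}")
```

Since `scipy.stats.wasserstein_distance` computes the 1-Wasserstein (Kantorovich) distance between the empirical distributions of two samples, the printed value is exactly the \(L^1\)-Wasserstein discrepancy the paper's rates are stated in, evaluated for this naive estimator.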
Recommendations
- Posterior contraction rates for deconvolution of Dirichlet-Laplace mixtures
- Improved rates for Wasserstein deconvolution with ordinary smooth error in dimension one
- Bayesian Kantorovich deconvolution in finite mixture models
- Minimax rates of convergence for Wasserstein deconvolution with supersmooth errors in any dimension
- A deconvolution path for mixtures
Keywords: entropy; maximum likelihood; rate of convergence; Wasserstein distance; deconvolution; Dirichlet process; Hellinger distance; posterior distribution; sieve; Laplace mixture
Cites Work
- Entropies and rates of convergence for maximum likelihood and Bayes estimation for mixtures of normal densities.
- On Estimation of a Probability Density Function and Mode
- Convergence rates of posterior distributions for non iid observations
- On the consistency of Bayes estimates
- Convergence rates of posterior distributions.
- Title not available
- Title not available
- On the optimal rates of convergence for nonparametric deconvolution problems
- Convergence of estimates under dimensionality restrictions
- Title not available
- Convergence of latent mixing measures in finite and infinite mixture models
- On a class of Bayesian nonparametric estimates: I. Density estimates
- A note on Linnik's distribution
- Bayesian nonparametrics
- Posterior rates of convergence for Dirichlet mixtures of exponential power densities
- Posterior concentration rates for empirical Bayes procedures with applications to Dirichlet process mixtures
- Posterior contraction rates for deconvolution of Dirichlet-Laplace mixtures
- On the Estimation of the Probability Density, I
- Posterior convergence rates of Dirichlet mixtures at smooth densities
- Introduction to nonparametric estimation
- Deconvolution problems in nonparametric statistics
- The tails of probabilities chosen from a Dirichlet prior
- Probability inequalities for likelihood ratios and convergence rates of sieve MLEs
- Rates of contraction for posterior distributions in \(L^{r}\)-metrics, \(1 \leq r \leq \infty\)
- Rates of convergence for minimum contrast estimators
- Hellinger-consistency of certain nonparametric maximum likelihood estimators
- Adaptive Bayesian density estimation in \(L^p\)-metrics with Pitman-Yor or normalized inverse-Gaussian process kernel mixtures
- Title not available
- Title not available
- Asymptotic normality in mixture models
- On Rates of Convergence for Bayesian Density Estimation
- Mean integrated square error properties of density estimates
- Estimation of distributions, moments and quantiles in deconvolution problems
- Convergence rates for Bayesian density estimation of infinite-dimensional exponential families
- Rates of convergence for the maximum likelihood estimator in mixture models
- Improved rates for Wasserstein deconvolution with ordinary smooth error in dimension one
Cited In (5)
- Minimax rates of convergence for Wasserstein deconvolution with supersmooth errors in any dimension
- Posterior contraction rates for deconvolution of Dirichlet-Laplace mixtures
- Wasserstein convergence in Bayesian and frequentist deconvolution models
- Improved rates for Wasserstein deconvolution with ordinary smooth error in dimension one
- Bayesian Kantorovich deconvolution in finite mixture models
This page was built for publication: Bayes and maximum likelihood for \(L^1\)-Wasserstein deconvolution of Laplace mixtures