Rates of convergence for Gibbs sampling in the analysis of almost exchangeable data (Q6056577)

From MaRDI portal
Property / OpenAlex ID: W3096290487

scientific article; zbMATH DE number 7757102

Language: English
Label: Rates of convergence for Gibbs sampling in the analysis of almost exchangeable data
Description: scientific article; zbMATH DE number 7757102

    Statements

    Rates of convergence for Gibbs sampling in the analysis of almost exchangeable data (English)
    30 October 2023
    The paper analyses a Gibbs sampler for sampling \(\mathbf p = (p_1, \dots, p_d) \in [0,1]^d\) from a distribution with density proportional to \[ \exp\Bigl( - A^2 \sum_{i,j \in [d] : i < j} c_{i,j} (p_i - p_j)^2 \Bigr), \] where \(A\) is large and the \(c_{i,j}\) are non-negative weights. The Gibbs sampler used is \textit{Glauber dynamics}: in each step a single, uniformly chosen coordinate is updated according to its conditional distribution given all the others.

    The non-negative weights can be viewed as a weighted graph. Extending these weights to a symmetric matrix \(C = (c_{i,j})_{i,j=1}^d\) with diagonal \(c_{i,i} := \sum_{j \in [d] : j \ne i} c_{i,j}\), the weights generate a reversible, discrete-time Markov chain on \([d]\): \[ \textstyle p_{i,j} = c_{i,j} / c_i, \quad\text{where}\quad c_i := \sum_{j \in [d] : j \ne i} c_{i,j}. \] Its equilibrium distribution \(\pi = (\pi_i)_{i=1}^d\) is proportional to \(c = (c_i)_{i=1}^d\). Letting \(\Delta = I - P\) denote the Laplacian of this Markov chain, \[ \textstyle \sum_{i,j \in [d] : i < j} c_{i,j} (p_i - p_j)^2 = \langle \mathbf p, \Delta \mathbf p \rangle, \quad\text{where}\quad \langle \mathbf x, \mathbf y \rangle := \sum_{i \in [d]} \pi_i x_i y_i. \]

    The main result concerns the mixing time of the Gibbs sampler in the limit \(A \to \infty\), with all other quantities held fixed. The mixing metric is the \(\ell_\infty\)-Wasserstein distance \[ \textstyle d_\infty(\mu, \pi) := \inf_{(X, Y)} \mathbb E( \| X - Y \|_\infty ), \] where the infimum is over all couplings \((X, Y)\) of \(\mu\) and \(\pi\). For \(\varepsilon \in (0,1)\), define \[ \textstyle t_\mathsf{mix}(\varepsilon) := \inf\{ t \ge 0 \mid \sup_{\mathbf p \in [0,1]^d} d_\infty(K_t(\mathbf p), \pi_{A,C}) \le \varepsilon \}, \] where \(K_t(\mathbf p)\) denotes the time-\(t\) distribution of the Gibbs sampler started from \(\mathbf p \in [0,1]^d\) and \(\pi_{A,C}\) is the target distribution \[ \frac{d\pi_{A,C}}{d\mathbf p} \propto \exp\Bigl( - A^2 \sum_{i,j \in [d] : i < j} c_{i,j} (p_i - p_j)^2 \Bigr). \] In short, the mixing time is shown to be of order \(A^2\): \[ t_\mathsf{mix}(\tfrac14 - \varepsilon) \gtrsim \tfrac \lambda\gamma A^2 \quad\text{and}\quad t_\mathsf{mix}(\varepsilon) \lesssim d A^2, \] where \(\lambda\) is the spectral gap of \(\Delta\) and \(\gamma\) the absolute spectral gap. The implicit constants in \(\gtrsim\) and \(\lesssim\) depend only on \(\varepsilon\).

    The paper is motivated by work of de Finetti, who was particularly interested in understanding situations where \(\mathbf p\) is approximately concentrated around the main diagonal of the hypercube, \[ \mathcal D := \{ p \mathbf 1 \mid p \in [0,1] \} \subseteq [0,1]^d, \quad\text{where}\quad \mathbf 1 := (1, \ldots, 1). \] This situation is referred to as \textit{almost exchangeability}, because it captures the idea that the \(d\) variables are almost indistinguishable.
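    The single-site update has an explicit form: conditionally on the other coordinates, the exponent above is quadratic in \(p_i\), so the conditional law of \(p_i\) is a Gaussian with mean \(\sum_{j \ne i} c_{i,j} p_j / c_i\) and variance \(1/(2 A^2 c_i)\), truncated to \([0,1]\). The following Python sketch of one such update is purely illustrative; the function name glauber_step and the use of scipy.stats.truncnorm are assumptions of this summary, not code from the paper.

    import numpy as np
    from scipy.stats import truncnorm

    def glauber_step(p, C, A, rng):
        """One Glauber-dynamics step: resample a uniformly chosen coordinate of p
        from its conditional law under the density proportional to
        exp(-A^2 * sum_{i<j} C[i, j] * (p[i] - p[j])^2) on [0, 1]^d."""
        d = len(p)
        i = rng.integers(d)                           # uniformly chosen coordinate
        c_i = C[i].sum() - C[i, i]                    # c_i = sum_{j != i} c_{i,j}
        if c_i == 0:                                  # isolated vertex: conditional is uniform on [0, 1]
            p[i] = rng.uniform()
            return p
        mean = (C[i] @ p - C[i, i] * p[i]) / c_i      # weighted average of the other coordinates
        sd = 1.0 / (A * np.sqrt(2.0 * c_i))           # st. dev. of the untruncated Gaussian conditional
        a, b = (0.0 - mean) / sd, (1.0 - mean) / sd   # truncation bounds in standardised units
        p[i] = truncnorm.rvs(a, b, loc=mean, scale=sd, random_state=rng)
        return p

    # Example run: complete graph with unit weights and a moderately large A.
    rng = np.random.default_rng(0)
    d = 5
    C = np.ones((d, d)) - np.eye(d)
    p = rng.uniform(size=d)
    for _ in range(10_000):
        p = glauber_step(p, C, 50.0, rng)

    Each update moves a coordinate by roughly \(1/A\), which is heuristically consistent with a mixing time of order \(A^2\).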
    Gibbs sampler
    Markov chain
    mixing time