Is there an analog of Nesterov acceleration for gradient-based MCMC? (Q2040101)

From MaRDI portal
Property / MaRDI profile type: MaRDI publication profile
Property / arXiv ID: 1902.00996
Property / cites work: Q5441008
Property / cites work: The Zig-Zag Process and Super-Efficient Sampling for Bayesian Analysis of Big Data
Property / cites work: Coupling and convergence for Hamiltonian Monte Carlo
Property / cites work: The Bouncy Particle Sampler: A Non-Reversible Rejection-Free Markov Chain Monte Carlo Method
Property / cites work: Exponential Convergence to Equilibrium for Kinetic Fokker-Planck Equations
Property / cites work: Q4617601
Property / cites work: Theoretical Guarantees for Approximate Sampling from Smooth and Log-Concave Densities
Property / cites work: User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient
Property / cites work: On sampling from a log-concave density using kinetic Langevin diffusions
Property / cites work: Conservative-dissipative approximation schemes for a generalized Kramers equation
Property / cites work: Q5381127
Property / cites work: Nonasymptotic convergence analysis for the unadjusted Langevin algorithm
Property / cites work: High-dimensional Bayesian inference via the unadjusted Langevin algorithm
Property / cites work: Q5214293
Property / cites work: Riemann Manifold Langevin and Hamiltonian Monte Carlo Methods
Property / cites work: Logarithmic Sobolev Inequalities
Property / cites work: A variational principle for the Kramers equation with unbounded external forces
Property / cites work: The Variational Formulation of the Fokker--Planck Equation
Property / cites work: Acceleration of convergence to equilibrium in Markov chains by breaking detailed balance
Property / cites work: Adaptive Thermostats for Noisy Gradient Systems
Property / cites work: Sampling can be faster than optimization
Property / cites work: Irreversible samplers from jump and continuous Markov processes
Property / cites work: Q3172405
Property / cites work: Q3967358
Property / cites work: Introductory lectures on convex optimization. A basic course.
Property / cites work: Q3320132
Property / cites work: Adaptive restart for accelerated gradient schemes
Property / cites work: Generalization of an inequality by Talagrand and links with the logarithmic Sobolev inequality
Property / cites work: A function space HMC algorithm with second order Langevin diffusion limit
Property / cites work: Convex functions on non-convex domains
Property / cites work: Q5593503
Property / cites work: Improving the convergence of reversible samplers
Property / cites work: Langevin diffusions and Metropolis-Hastings algorithms
Property / cites work: Hypocoercivity
Property / cites work: Optimal Transport
Property / cites work: A variational perspective on accelerated methods in optimization
Property / cites work: Extension of Convex Function
Property / OpenAlex ID: W3163522496

Latest revision as of 10:27, 30 July 2024

Language: English
Label: Is there an analog of Nesterov acceleration for gradient-based MCMC?
Description: scientific article

    Statements

    Is there an analog of Nesterov acceleration for gradient-based MCMC? (English)
    9 July 2021
    In continuous optimization, Nesterov's accelerated gradient method speeds up the convergence of plain gradient descent. This paper asks whether an analogous acceleration exists for Markov chain Monte Carlo (MCMC) methods, whose convergence speed is often the main practical bottleneck. As in accelerated gradient descent, the authors introduce a momentum variable and derive a stochastic differential equation for accelerated dynamics on the space of probability distributions, with the Kullback-Leibler divergence to the target distribution as the objective functional. By analyzing the convergence rate of the continuous-time accelerated dynamics together with the discretization error, they show that the resulting sampling algorithm improves the dimension and accuracy dependence of the unadjusted Langevin algorithm, which is the sampling counterpart of unaccelerated gradient descent. The paper thus provides a solid theoretical foundation for practical accelerated MCMC methods.
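    The contrast drawn above between the unadjusted Langevin algorithm and a momentum-augmented sampler can be sketched on a toy Gaussian target. The following is a minimal illustration, not the paper's algorithm or tuning: it compares ULA with a simple Euler-type discretization of the kinetic (underdamped) Langevin diffusion, and the step size, friction parameter `gamma`, and chain counts are illustrative assumptions.

```python
import numpy as np

# Purely illustrative sketch (not the paper's scheme): sample a standard
# Gaussian target pi(x) ~ exp(-U(x)) with U(x) = x^2/2, so grad U(x) = x.

def grad_U(x):
    return x  # gradient of U(x) = x**2 / 2

rng = np.random.default_rng(0)
h, n_steps, n_chains = 0.05, 1000, 10_000

# Unadjusted Langevin algorithm (ULA): the sampling analogue of gradient
# descent -- a gradient step plus Gaussian noise of variance 2h.
x = np.zeros(n_chains)  # independent chains, all started at 0
for _ in range(n_steps):
    x = x - h * grad_U(x) + np.sqrt(2 * h) * rng.standard_normal(n_chains)

# Momentum-augmented sampler: an Euler-type discretization of the kinetic
# (underdamped) Langevin diffusion, the sampling analogue of momentum
# methods. The friction parameter gamma is an arbitrary illustrative choice.
gamma = 2.0
q, p = np.zeros(n_chains), np.zeros(n_chains)
for _ in range(n_steps):
    p = p - h * (gamma * p + grad_U(q)) \
        + np.sqrt(2 * gamma * h) * rng.standard_normal(n_chains)
    q = q + h * p

# Both samplers should end up with a sample spread close to the target's
# standard deviation of 1 (up to O(h) discretization bias).
print(x.std(), q.std())
```

The theoretical point of the paper is not visible on a toy example like this; the claimed gain is in how the error depends on dimension and accuracy, which only shows up in high-dimensional, carefully analyzed settings.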
    Keywords: accelerated gradient descent; Langevin Monte Carlo; Markov chain Monte Carlo (MCMC); sampling algorithms