Laplace's method revisited: Weak convergence of probability measures
From MaRDI portal
Publication:1148601
DOI: 10.1214/AOP/1176994579
zbMATH Open: 0452.60007
OpenAlex: W2003747360
MaRDI QID: Q1148601
FDO: Q1148601
Authors: Chii-Ruey Hwang
Publication date: 1980
Published in: The Annals of Probability
Full work available at URL: https://doi.org/10.1214/aop/1176994579
Keywords: weak convergence; Laplace's method; energy function; smooth manifold; weak convergence of probability measures
Cited In (51)
- Optimization by linear kinetic equations and mean-field Langevin dynamics
- A stochastic algorithm finding generalized means on compact manifolds
- Random tunneling by means of acceptance-rejection sampling for global optimization
- B-DeepONet: an enhanced Bayesian DeepONet for solving noisy parametric PDEs using accelerated replica exchange SGLD
- A characterization of probabilities with full support and the Laplace method
- A criterion on a repeller being a null set of any limit measure for stochastic differential equations
- Annealing diffusions in a potential function with a slow growth
- Logarithmic Sobolev inequalities and Langevin algorithms in \(\mathbb R^n\)
- Linearly constrained global optimization and stochastic differential equations
- Simulated annealing type algorithms for multivariate optimization
- Simulated annealing with a potential function with discontinuous gradient on \(\mathbb R^d\)
- Convergence of Langevin-simulated annealing algorithms with multiplicative noise. II: Total variation
- Oscillation of Metropolis-Hastings and simulated annealing algorithms around LASSO estimator
- Convergence analysis of a global optimization algorithm using stochastic differential equations
- Mean-field Langevin dynamics and energy landscape of neural networks
- Gibbs measures asymptotics
- Wasserstein convergence rates of increasingly concentrating probability measures
- On the Generalized Langevin Equation for Simulated Annealing
- Convergence rates of Gibbs measures with degenerate minimum
- Stochastic gradient Hamiltonian Monte Carlo for non-convex learning
- Zero white noise limit through Dirichlet forms, with application to diffusions in a random medium
- Global Optimization via Schrödinger–Föllmer Diffusion
- A study of subadmissible simulated annealing algorithms
- Multi-index antithetic stochastic gradient algorithm
- Simulated annealing with time-dependent energy function
- On limiting behavior of stationary measures for stochastic evolution systems with small noise intensity
- An improved annealing method and its large-time behavior
- Taming Neural Networks with TUSLA: Nonconvex Learning via Adaptive Stochastic Gradient Langevin Algorithms
- Convergence rates for annealing diffusion processes
- Thermalisation for small random perturbations of dynamical systems
- Kinetic Langevin MCMC sampling without gradient Lipschitz continuity -- the strongly convex case
- A geometric Laplace method
- Weak convergence rates for stochastic approximation with application to multiple targets and simulated annealing
- Simultaneous small noise limit for singularly perturbed slow-fast coupled diffusions
- Large-time behavior of perturbed diffusion Markov processes with applications to the second eigenvalue problem for Fokker-Planck operators and simulated annealing
- Simulated annealing for constrained global optimization
- Simulation-based Bayesian optimal design of aircraft trajectories for air traffic management
- Softening bilevel problems via two-scale Gibbs measures
- A smoothing algorithm for finite min-max-min problems
- Nonasymptotic estimates for stochastic gradient Langevin dynamics under local conditions in nonconvex optimization
- Large deviation principle in discrete time nonlinear filtering
- Convergence in distribution of some self-interacting diffusions
- Recent progress on the small parameter exit problem
- Approximations of the sum of states by Laplace's method for a system of particles with a finite number of energy levels and application to limit theorems
- On the local Lipschitz stability of Bayesian inverse problems
- Non-asymptotic convergence bounds for modified tamed unadjusted Langevin algorithm in non-convex setting
- A natural order in dynamical systems based on Conley-Markov matrices
- On limit measures and their supports for stochastic ordinary differential equations
- Stochastic stability of measures in gradient systems
- Limit behavior of the invariant measure for Langevin dynamics
- A Laplace method for under-determined Bayesian optimal experimental designs