Efficient Bayesian Experimentation Using an Expected Information Gain Lower Bound
DOI: 10.1137/15M1043303 · zbMATH Open: 1370.62019 · arXiv: 1506.00053 · MaRDI QID: Q5269853
R. G. Ghanem, Paris Hajali, Panagiotis Tsilifis
Publication date: 28 June 2017
Published in: SIAM/ASA Journal on Uncertainty Quantification
Full work available at URL: https://arxiv.org/abs/1506.00053
Keywords: Monte Carlo sampling; stochastic optimization; polynomial chaos; Bayesian experimental design; two-phase transport; expected information gain
MSC classifications: Bayesian inference (62F15); Optimal statistical designs (62K05); Measures of information, entropy (94A17)
Cites Work
- 5 further cited works (titles not available)
- Bayesian experimental design: A review
- On Information and Sufficiency
- An adaptive Metropolis algorithm
- Monte Carlo sampling methods using Markov chains and their applications
- A Stochastic Approximation Method
- Equation of State Calculations by Fast Computing Machines
- Inverse problems: a Bayesian perspective
- The Wiener--Askey Polynomial Chaos for Stochastic Differential Equations
- On a Measure of the Information Provided by an Experiment
- Modeling uncertainty in flow simulations via generalized polynomial chaos
- On the stability and accuracy of least squares approximations
- Multivariate stochastic approximation using a simultaneous perturbation gradient approximation
- Stochastic Estimation of the Maximum of a Regression Function
- Maximum Entropy Sampling and Optimal Bayesian Experimental Design
- Compressive sampling of polynomial chaos expansions: convergence analysis and sampling strategies
- Physical Systems with Random Uncertainties: Chaos Representations with Arbitrary Probability Measure
- Perturbation analysis and optimization of queueing networks
- Introduction to Uncertainty Quantification
- Gradient-based stochastic optimization methods in Bayesian experimental design
- Fast estimation of expected information gains for Bayesian experimental designs based on Laplace approximations
Cited In (15)
- Multilevel double loop Monte Carlo and stochastic collocation methods with importance sampling for Bayesian optimal experimental design
- Compressive sensing adaptation for polynomial chaos expansions
- Parallel Simultaneous Perturbation Optimization
- An extended polynomial chaos expansion for PDF characterization and variation with aleatory and epistemic uncertainties
- Surrogate-based sequential Bayesian experimental design using non-stationary Gaussian processes
- Multimodal information gain in Bayesian design of experiments
- Adaptive method for indirect identification of the statistical properties of random fields in a Bayesian framework
- Optimal Bayesian experimental design for subsurface flow problems
- Bayesian adaptation of chaos representations using variational inference and sampling on geodesics
- Efficient D-Optimal Design of Experiments for Infinite-Dimensional Bayesian Linear Inverse Problems
- Multilevel Monte Carlo estimation of expected information gains
- Reduced Wiener chaos representation of random fields via basis adaptation and projection
- The stochastic quasi-chemical model for bacterial growth: variational Bayesian parameter update
- Optimal Bayesian experimental design for electrical impedance tomography in medical imaging
- Bayesian sequential optimal experimental design for nonlinear models using policy gradient reinforcement learning