Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization
From MaRDI portal
Publication:5281236
Abstract: We develop and analyze \(M\)-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a non-asymptotic variational characterization of \(f\)-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.
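The abstract's idea can be illustrated with a minimal sketch (not the authors' implementation): the variational characterization of the KL divergence, \(\mathrm{KL}(P\|Q) = \sup_g \; \mathbb{E}_P[g(X)] - \mathbb{E}_Q[e^{g(Y)-1}]\), is concave in \(g\), so restricting \(g\) to a linear span of features (the feature map `feats` below is a hypothetical choice) turns divergence estimation from samples into a finite-dimensional convex program:

```python
# Hedged sketch: estimate KL(P||Q) from samples by maximizing the
# empirical variational objective
#     mean_P[g(x)] - mean_Q[exp(g(y) - 1)]  <=  KL(P||Q),
# over g(x) = w @ feats(x). The objective is concave in w, so a
# standard unconstrained solver suffices.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, 20000)   # samples from P = N(1, 1)
xq = rng.normal(0.0, 1.0, 20000)   # samples from Q = N(0, 1)

def feats(x):
    # hypothetical feature map; here g(x) = w[0] + w[1] * x,
    # which contains the optimal g* for this Gaussian pair
    return np.stack([np.ones_like(x), x], axis=0)

Fp, Fq = feats(xp), feats(xq)

def neg_objective(w):
    # negate the concave variational objective for a minimizer
    g_p = w @ Fp
    g_q = w @ Fq
    return -(g_p.mean() - np.exp(g_q - 1.0).mean())

res = minimize(neg_objective, x0=np.zeros(2), method="BFGS")
kl_hat = -res.fun
print(kl_hat)  # should lie near the true KL(N(1,1)||N(0,1)) = 0.5
```

Because the true log-likelihood ratio is linear in \(x\) here, the feature class incurs no approximation bias, and the estimate converges to the true divergence up to sampling error; with richer feature maps (or kernels) the same convex program applies to more general distribution pairs.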
Cited in (56 publications):
- Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search
- GAT–GMM: Generative Adversarial Training for Gaussian Mixture Models
- Variational representations of annealing paths: Bregman information under monotonic embedding
- Formulation and properties of a divergence used to compare probability measures without absolute continuity
- Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation
- On distributionally robust extreme value analysis
- Geometry of EM and related iterative algorithms
- Solving inverse stochastic problems from discrete particle observations using the Fokker-Planck equation and physics-informed neural networks
- On the empirical estimation of integral probability metrics
- Nonparametric estimation of Kullback-Leibler divergence
- Variational representations and neural network estimation of Rényi divergences
- A Deep Generative Approach to Conditional Sampling
- Non-parametric estimation of mutual information through the entropy of the linkage
- A Monte Carlo approach to quantifying model error in Bayesian parameter estimation
- Level sets semimetrics for probability measures with applications in hypothesis testing
- Change-point detection in time-series data by relative density-ratio estimation
- Robust Actuarial Risk Analysis
- Online direct density-ratio estimation applied to inlier-based outlier detection
- Statistical analysis of distance estimators with density differences and density ratios
- Relative Density-Ratio Estimation for Robust Distribution Comparison
- Minimum Divergence, Generalized Empirical Likelihoods, and Higher Order Expansions
- Statistical analysis of kernel-based least-squares density-ratio estimation
- Optimal experimental design: formulations and computations
- Convergence of latent mixing measures in finite and infinite mixture models
- Learning under nonstationarity: covariate shift and class-balance change
- scientific article (no title available; zbMATH DE number 7306898)
- Aggregated tests based on supremal divergence estimators for non-regular statistical models
- Smoothed noise contrastive mutual information neural estimation
- Machine learning with squared-loss mutual information
- Non-parametric two-sample tests: recent developments and prospects
- Computational complexity of kernel-based density-ratio estimation: a condition number analysis
- Bias reduction and metric learning for nearest-neighbor estimation of Kullback-Leibler divergence
- Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization
- Semi-supervised learning of class balance under class-prior change by distribution matching
- Model Uncertainty and Correctability for Directed Graphical Models
- Probabilistic model validation for uncertain nonlinear systems
- Imitation learning as \(f\)-divergence minimization
- scientific article (no title available; zbMATH DE number 7625192)
- Density-difference estimation
- Data-driven spatiotemporal modeling for structural dynamics on irregular domains by stochastic dependency neural estimation
- Least-squares two-sample test
- Robust Validation: Confident Predictions Even When Distributions Shift
- Optimizing Variational Representations of Divergences and Accelerating Their Statistical Estimation
- Direct density derivative estimation
- Geometrical Insights for Implicit Generative Modeling
- Quantization and clustering with Bregman divergences
- Improving bridge estimators via \(f\)-GAN
- Variational Bayesian optimal experimental design with normalizing flows
- Constructive setting for problems of density ratio estimation
- Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation
- Adaptive joint distribution learning
- Modern Bayesian experimental design
- Calibrated adversarial algorithms for generative modelling
- Sufficient dimension reduction via squared-loss mutual information estimation
- Stein variational gradient descent with learned direction
- Reducing the statistical error of generative adversarial networks using space-filling sampling