Estimating Divergence Functionals and the Likelihood Ratio by Convex Risk Minimization


Publication: 5281236

DOI: 10.1109/TIT.2010.2068870
zbMath: 1366.62071
arXiv: 0809.0853
MaRDI QID: Q5281236

Michael I. Jordan, XuanLong Nguyen, Martin J. Wainwright

Publication date: 27 July 2017

Published in: IEEE Transactions on Information Theory

Abstract: We develop and analyze $M$-estimation methods for divergence functionals and the likelihood ratios of two probability distributions. Our method is based on a non-asymptotic variational characterization of $f$-divergences, which allows the problem of estimating divergences to be tackled via convex empirical risk optimization. The resulting estimators are simple to implement, requiring only the solution of standard convex programs. We present an analysis of consistency and convergence for these estimators. Given conditions only on the ratios of densities, we show that our estimators can achieve optimal minimax rates for the likelihood ratio and the divergence functionals in certain regimes. We derive an efficient optimization algorithm for computing our estimates, and illustrate their convergence behavior and practical viability by simulations.
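
The variational characterization mentioned in the abstract can be stated explicitly. As a sketch, using the standard Fenchel-conjugate form (with $f^*$ the convex conjugate of $f$ and $\mathcal{G}$ a chosen function class; the notation here is an editorial gloss, not quoted from the paper):

$$D_f(\mathbb{P}\,\|\,\mathbb{Q}) \;=\; \int f\!\left(\frac{d\mathbb{P}}{d\mathbb{Q}}\right) d\mathbb{Q} \;\ge\; \sup_{g \in \mathcal{G}} \Big\{\, \mathbb{E}_{\mathbb{P}}[g(X)] - \mathbb{E}_{\mathbb{Q}}[f^{*}(g(X))] \,\Big\},$$

with equality when $\mathcal{G}$ is rich enough to contain a subgradient of $f$ at the likelihood ratio $d\mathbb{P}/d\mathbb{Q}$. Replacing the two expectations by empirical averages over the samples from $\mathbb{P}$ and $\mathbb{Q}$ turns divergence estimation into the convex empirical risk problem described in the abstract, and the maximizing $g$ recovers (a transform of) the likelihood ratio.

As an illustration only (not the paper's kernel-based estimator), a minimal Python sketch applies this bound to the Kullback-Leibler divergence, where $f(t) = t\log t$ and $f^{*}(v) = \exp(v-1)$, with a hypothetical linear-in-features witness class and hypothetical Gaussian samples:

import numpy as np
from scipy.optimize import minimize

def phi(x):
    # Fixed feature map [1, x, x^2] for scalar samples (a hypothetical choice).
    return np.stack([np.ones_like(x), x, x**2], axis=-1)

def neg_lower_bound(w, xp, xq):
    # Negative empirical bound E_P[g] - E_Q[exp(g - 1)] for KL(P||Q); convex in w.
    gp = phi(xp) @ w
    gq = phi(xq) @ w
    return -(gp.mean() - np.exp(gq - 1.0).mean())

rng = np.random.default_rng(0)
xp = rng.normal(0.5, 1.0, size=5000)   # sample from P = N(0.5, 1)
xq = rng.normal(0.0, 1.0, size=5000)   # sample from Q = N(0, 1)

res = minimize(neg_lower_bound, x0=np.zeros(3), args=(xp, xq))
kl_estimate = -res.fun                           # estimate of KL(P||Q); true value here is 0.125
ratio_estimate = np.exp(phi(xq) @ res.x - 1.0)   # estimated likelihood ratio dP/dQ at the Q sample
print(kl_estimate)

For the Kullback-Leibler case the optimal witness is $g^{*} = 1 + \log(d\mathbb{P}/d\mathbb{Q})$, so the fitted $g$ also yields the likelihood-ratio estimate $\exp(g - 1)$ computed in the last lines of the sketch.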


Full work available at URL: https://arxiv.org/abs/0809.0853






Related Items (55)

GAT–GMM: Generative Adversarial Training for Gaussian Mixture Models
Non-parametric estimation of mutual information through the entropy of the linkage
A Monte Carlo approach to quantifying model error in Bayesian parameter estimation
Statistical analysis of distance estimators with density differences and density ratios
Model Uncertainty and Correctability for Directed Graphical Models
Sufficient Dimension Reduction via Squared-Loss Mutual Information Estimation
Density-ratio matching under the Bregman divergence: a unified framework of density-ratio estimation
Smoothed noise contrastive mutual information neural estimation
Data-driven spatiotemporal modeling for structural dynamics on irregular domains by stochastic dependency neural estimation
A Deep Generative Approach to Conditional Sampling
On distributionally robust extreme value analysis
Stein variational gradient descent with learned direction
Geometry of EM and related iterative algorithms
Geometrical Insights for Implicit Generative Modeling
Statistical analysis of kernel-based least-squares density-ratio estimation
Level sets semimetrics for probability measures with applications in hypothesis testing
Aggregated tests based on supremal divergence estimators for non-regular statistical models
Computational complexity of kernel-based density-ratio estimation: a condition number analysis
On the empirical estimation of integral probability metrics
Convergence of latent mixing measures in finite and infinite mixture models
Least-squares two-sample test
Density-Difference Estimation
Conditional Density Estimation with Dimensionality Reduction via Squared-Loss Conditional Entropy Minimization
Online Direct Density-Ratio Estimation Applied to Inlier-Based Outlier Detection
Direct Density Derivative Estimation
Direct Learning of Sparse Changes in Markov Networks by Density Ratio Estimation
Probabilistic model validation for uncertain nonlinear systems
Nonparametric Estimation of Küllback-Leibler Divergence
Semi-supervised learning of class balance under class-prior change by distribution matching
Minimum Divergence, Generalized Empirical Likelihoods, and Higher Order Expansions
Bias Reduction and Metric Learning for Nearest-Neighbor Estimation of Kullback-Leibler Divergence
Unnamed Item
Variational Representations and Neural Network Estimation of Rényi Divergences
Robust Actuarial Risk Analysis
Direct density-ratio estimation with dimensionality reduction via least-squares hetero-distributional subspace search
Quantization and clustering with Bregman divergences
Change-point detection in time-series data by relative density-ratio estimation
Modern Bayesian experimental design
Calibrated adversarial algorithms for generative modelling
Reducing the statistical error of generative adversarial networks using space-filling sampling
Variational representations of annealing paths: Bregman information under monotonic embedding
Optimal experimental design: formulations and computations
Learning under nonstationarity: covariate shift and class-balance change
Non-parametric two-sample tests: recent developments and prospects
Robust Validation: Confident Predictions Even When Distributions Shift
Machine learning with squared-loss mutual information
Variational Bayesian optimal experimental design with normalizing flows
Adaptive joint distribution learning
Imitation Learning as f-Divergence Minimization
Constructive setting for problems of density ratio estimation
Relative Density-Ratio Estimation for Robust Distribution Comparison
Improving bridge estimators via $f$-GAN
Unnamed Item
Solving Inverse Stochastic Problems from Discrete Particle Observations Using the Fokker-Planck Equation and Physics-Informed Neural Networks
Formulation and properties of a divergence used to compare probability measures without absolute continuity




