On local divergences between two probability measures
Publication: 263904
DOI: 10.1007/s00184-015-0556-6
zbMath: 1333.62029
OpenAlex: W1070519977
MaRDI QID: Q263904
G. Avlogiaris, Athanasios C. Micheas, Konstantinos G. Zografos
Publication date: 5 April 2016
Published in: Metrika
Full work available at URL: https://doi.org/10.1007/s00184-015-0556-6
Keywords: exponential family; Kullback-Leibler divergence; \(\phi\)-divergence; Cressie and Read power divergence; local divergence
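For orientation, the keywords refer to the Csiszár \(\phi\)-divergence family treated in the paper. The sketch below recalls only the standard (global) definition; it is not reproduced from the publication, and the remark that the "local" divergence of the title restricts attention to a subregion of the sample space is an assumption based on the title alone.

\[
D_{\phi}(P,Q) = \int_{\mathcal{X}} \phi\!\left(\frac{p(x)}{q(x)}\right) q(x)\, d\mu(x), \qquad \phi \text{ convex},\ \phi(1)=0,
\]
where \(p\) and \(q\) are densities of \(P\) and \(Q\) with respect to a dominating measure \(\mu\). The Kullback-Leibler divergence corresponds to \(\phi(t)=t\log t\), and the Cressie and Read power divergence to \(\phi_{\lambda}(t)=\dfrac{t^{\lambda+1}-t-\lambda(t-1)}{\lambda(\lambda+1)}\) for \(\lambda \neq 0,-1\).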
Related Items (4)
- On reconsidering entropies and divergences and their cumulative counterparts: Csiszár's, DPD's and Fisher's type cumulative and survival measures
- On testing local hypotheses via local divergence
- Some Universal Insights on Divergences for Statistics, Machine Learning and Artificial Intelligence
- A criterion for local model selection
Cites Work
- Divergence-based estimation and testing with misclassified data
- A local spectral approach for assessing time series model misspecification
- Theory of statistical inference and information. Transl. from the Slovak by the author
- Minimum (\(h\),\(\phi\))-divergences estimators with weights
- Expressions for Rényi and Shannon entropies for multivariate distributions
- Information indices: Unification and applications.
- Entropy, divergence and distance measures with econometric applications
- On Mardia's and Song's measures of kurtosis in elliptical distributions
- On the f-divergence and singularity of probability measures
- On Divergences and Informations in Statistics and Information Theory
- Rényi Statistics in Directed Families of Exponential Experiments*
- Principal Information Theoretic Approaches
- On divergences of finite measures and their applicability in statistics and information theory
- On Information and Sufficiency
- Statistical Inference