Log-determinant divergences revisited: alpha-beta and gamma log-det divergences
DOI: 10.3390/e17052988
zbMath: 1338.94034
arXiv: 1412.7146
OpenAlex: W2963959237
Wikidata: Q60486507 (Scholia: Q60486507)
MaRDI QID: Q296329
Andrzej Cichocki, Sergio Cruces, Shun-ichi Amari
Publication date: 15 June 2016
Published in: Entropy
Full work available at URL: https://arxiv.org/abs/1412.7146
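For orientation, the central quantity of the paper is the alpha-beta (AB) log-det divergence between symmetric positive definite matrices, which for the generic case \(\alpha,\beta\neq 0\), \(\alpha+\beta\neq 0\) is given in the linked arXiv preprint as \(D^{(\alpha,\beta)}_{AB}(\mathbf{P}\,\|\,\mathbf{Q}) = \frac{1}{\alpha\beta}\log\det\frac{\alpha(\mathbf{P}\mathbf{Q}^{-1})^{\beta}+\beta(\mathbf{P}\mathbf{Q}^{-1})^{-\alpha}}{\alpha+\beta}\). The Python sketch below is a minimal illustration of this generic-case formula only; the function name and the use of SciPy's generalized eigensolver are illustrative choices, not part of this record.

import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P, Q, alpha, beta):
    """AB log-det divergence between SPD matrices P and Q
    (generic case: alpha != 0, beta != 0, alpha + beta != 0)."""
    # The eigenvalues of P Q^{-1} coincide with the generalized
    # eigenvalues of the pencil (P, Q); they are real and positive
    # when P and Q are symmetric positive definite.
    lam = eigh(P, Q, eigvals_only=True)
    # log det f(P Q^{-1}) = sum over eigenvalues of log f(lambda_i).
    terms = (alpha * lam**beta + beta * lam**(-alpha)) / (alpha + beta)
    return np.sum(np.log(terms)) / (alpha * beta)

# Example: divergence between two random 3x3 SPD matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)); P = A @ A.T + 3 * np.eye(3)
B = rng.standard_normal((3, 3)); Q = B @ B.T + 3 * np.eye(3)
print(ab_logdet_divergence(P, Q, alpha=0.5, beta=0.5))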
Related Items (10)
- Least informative distributions in maximum \(q\)-log-likelihood estimation
- Entropy-regularized 2-Wasserstein distance between Gaussian measures
- Geometry-aware principal component analysis for symmetric positive definite matrices
- Entropic regularization of Wasserstein distance between infinite-dimensional Gaussian measures and Gaussian processes
- Infinite-dimensional log-determinant divergences between positive definite Hilbert-Schmidt operators
- Nonnegative Matrix Factorization and Log-Determinant Divergences
- Quantum divergences with \(p\)-power means
- Averaging Symmetric Positive-Definite Matrices
- Alpha-beta log-determinant divergences between positive definite trace class operators
- Alpha Procrustes metrics between positive definite operators: a unifying formulation for the Bures-Wasserstein and Log-Euclidean/Log-Hilbert-Schmidt metrics
Cites Work
- Entropy differential metric, distance and divergence measures in probability spaces: A unified approach
- Families of alpha-, beta- and gamma-divergences: flexible and robust measures of similarities
- The multilinear normal distribution: introduction and some basic properties
- Factorizations of invertible density matrices
- Robust parameter estimation with a small bias against heavy contamination
- Majorization, doubly stochastic matrices, and comparison of eigenvalues
- Maximum likelihood estimation for the tensor normal distribution: Algorithm, minimum sample size, and empirical bias and dispersion
- Optimal shrinkage of eigenvalues in the spiked covariance model
- Means of Hermitian positive-definite matrices based on the log-determinant \(\alpha\)-divergence function
- Positive definite matrices
- Separable covariance arrays via the Tucker product, with applications to multivariate relational data
- Positive definite matrices and the S-divergence
- Array Variate Random Variables with Multiway Kronecker Delta Covariance Matrix Structure
- Hilbert's projective metric in quantum information theory
- Divergence Function, Duality, and Convex Analysis
- $\alpha$-Divergence Is Unique, Belonging to Both $f$-Divergence and Bregman Divergence Classes
- Optimal Shrinkage of Singular Values
- Jensen Divergence-Based Means of SPD Matrices