Generalized twin Gaussian processes using Sharma-Mittal divergence

From MaRDI portal
Publication:747261

DOI: 10.1007/S10994-015-5497-9 | zbMATH Open: 1341.62203 | arXiv: 1409.7480 | OpenAlex: W1576564833 | MaRDI QID: Q747261 | FDO: Q747261


Authors: Mohamed Elhoseiny, Ahmed Elgammal


Publication date: 23 October 2015

Published in: Machine Learning

Abstract: There has been a growing interest in mutual information measures due to their wide range of applications in Machine Learning and Computer Vision. In this paper, we present a generalized structured regression framework based on Sharma-Mittal divergence, a relative entropy measure, which is introduced to the Machine Learning community in this work. Sharma-Mittal (SM) divergence is a generalized mutual information measure covering the widely used Rényi, Tsallis, Bhattacharyya, and Kullback-Leibler (KL) relative entropies as special cases. Specifically, we study Sharma-Mittal divergence as a cost function in the context of Twin Gaussian Processes (TGP) (Bo and Sminchisescu, 2010), where it generalizes over the KL-divergence without computational penalty. We show interesting properties of Sharma-Mittal TGP (SMTGP) through a theoretical analysis, which covers missing insights in the traditional TGP formulation; moreover, we generalize this theory to SM-divergence, of which KL-divergence is a special case. Experimentally, we evaluate the proposed SMTGP framework on several datasets. The results show that SMTGP achieves better predictions than KL-based TGP, since it offers a richer class of models through its parameters, which we learn from the data.
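For orientation, the Sharma-Mittal divergence in its standard two-parameter form (the paper's own parameterization may differ in notation) can be written as

D_{\alpha,\beta}(p \,\|\, q) = \frac{1}{\beta - 1}\left[\left(\int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right)^{\frac{1-\beta}{1-\alpha}} - 1\right], \qquad \alpha > 0,\ \alpha \neq 1,\ \beta \neq 1,

which recovers the divergences named in the abstract as limiting cases: \beta \to 1 yields the Rényi divergence \frac{1}{\alpha-1}\log\int p^{\alpha} q^{1-\alpha}\, dx, \beta \to \alpha yields the Tsallis divergence \frac{1}{\alpha-1}\left(\int p^{\alpha} q^{1-\alpha}\, dx - 1\right), \alpha,\beta \to 1 yields the Kullback-Leibler divergence, and at \alpha = 1/2 the inner integral is the Bhattacharyya coefficient.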


Full work available at URL: https://arxiv.org/abs/1409.7480



