Variational Encoders and Autoencoders: Information-theoretic Inference and Closed-form Solutions

From MaRDI portal
Publication: 6359178

arXiv: 2101.11428 · MaRDI QID: Q6359178 · FDO: Q6359178


Authors: Karthik Duraisamy


Publication date: 27 January 2021

Abstract: This work develops problem statements related to encoders and autoencoders with the goal of elucidating variational formulations and establishing clear connections to information-theoretic concepts. Specifically, four problems with varying levels of input are considered: (a) the data, likelihood, and prior distributions are given; (b) the data and likelihood are given; (c) the data and prior are given; (d) the data and the dimensionality of the parameters are specified. The first two problems seek encoders (i.e. the posterior), and the latter two seek autoencoders (i.e. the posterior and the likelihood). A variational Bayesian setting is pursued, and detailed derivations are provided for the resulting optimization problem. Following this, a linear Gaussian setting is adopted, and closed-form solutions are derived. Numerical experiments are also performed to verify expected behavior and assess convergence properties. Explicit connections are made to rate-distortion theory and information bottleneck theory, and the related concept of statistical sufficiency is also explored. One of the motivations of this work is to present the theory and learning dynamics associated with variational inference and autoencoders, and to expose information-theoretic concepts from a computational science perspective.
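As a hedged illustration of the "closed-form solutions in a linear Gaussian setting" mentioned in the abstract (a minimal sketch, not the paper's exact formulation or notation): in a linear Gaussian model with a standard normal prior on the latent variable and a Gaussian likelihood, the posterior, i.e. the encoder, is itself Gaussian and can be written down exactly. The weight matrix `W` and noise level `sigma` below are hypothetical choices made only for the example.

```python
import numpy as np

# Sketch, assuming the standard Bayesian linear-Gaussian setup:
#   prior       z ~ N(0, I)
#   likelihood  x | z ~ N(W z, sigma^2 I)
# Then the posterior p(z | x) is Gaussian with
#   Sigma_post = (I + W^T W / sigma^2)^{-1}
#   mu_post    = Sigma_post W^T x / sigma^2
rng = np.random.default_rng(0)
d_x, d_z, sigma = 5, 2, 0.1          # hypothetical dimensions and noise level

W = rng.standard_normal((d_x, d_z))  # hypothetical linear decoder weights
z_true = rng.standard_normal(d_z)    # a latent sample to recover
x = W @ z_true + sigma * rng.standard_normal(d_x)  # observed data

# Closed-form Gaussian posterior (the exact linear "encoder").
Sigma_post = np.linalg.inv(np.eye(d_z) + W.T @ W / sigma**2)
mu_post = Sigma_post @ W.T @ x / sigma**2

print(mu_post)  # posterior mean: the encoder's point estimate of z
```

Because the posterior is exact here, no iterative variational optimization is needed; this is the kind of analytically tractable baseline against which learning dynamics can be compared.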

