Understanding autoencoders with information theoretic concepts
From MaRDI portal
Publication:2185600
DOI: 10.1016/j.neunet.2019.05.003
zbMath: 1458.68197
DBLP: journals/nn/YuP19
arXiv: 1804.00057
OpenAlex: W2963386266
Wikidata: Q92318053
Scholia: Q92318053
MaRDI QID: Q2185600
Publication date: 5 June 2020
Published in: Neural Networks
Full work available at URL: https://arxiv.org/abs/1804.00057
Related Items (2)
- QuantNet: transferring learning across trading strategies
- Echo state network with a global reversible autoencoder for time series classification
Cites Work
- Gauss and the invention of least squares
- Why does deep and cheap learning work so well?
- Intrinsic dimension estimation: advances and open problems
- A scale-based approach to finding effective dimensionality in manifold learning
- On the notion(s) of duality for Markov processes
- A class of measures of informativity of observation channels
- DANCo: an intrinsic dimensionality estimator exploiting angle and norm concentration
- Trace optimization and eigenproblems in dimension reduction methods
- Measures of Entropy From Data Using Infinitely Divisible Kernels
- Reducing the Dimensionality of Data with Neural Networks
- Markov Chains
- A Bayesian Analysis of Self-Organizing Maps
- Stable Takens' Embeddings for Linear Dynamical Systems
- Estimation of Entropy and Mutual Information
- Information Theoretic Learning
- Data Processing Theorems and the Second Law of Thermodynamics
- Infinitely Divisible Matrices
- On Estimation of a Probability Density Function and Mode