Riemannian metrics for neural networks I: feedforward networks
From MaRDI portal
Publication: 4602826
DOI: 10.1093/imaiai/iav006
zbMath: 1380.68337
arXiv: 1303.0818
OpenAlex: W2962913334
Wikidata: Q115276273 (Scholia: Q115276273)
MaRDI QID: Q4602826
Publication date: 7 February 2018
Published in: Information and Inference
Full work available at URL: https://arxiv.org/abs/1303.0818
MSC classifications: Learning and adaptive systems in artificial intelligence (68T05); Measures of information, entropy (94A17); Local Riemannian geometry (53B20)
Related Items:
- On the locality of the natural gradient for learning in deep Bayesian networks
- Invariance properties of the natural gradient in overparametrised systems
- Efficient approximations of the Fisher matrix in neural networks using Kronecker product singular value decomposition
- Dynamics of Learning in MLP: Natural Gradient and Singularity Revisited
- Adaptive Learning Algorithm Convergence in Passive and Reactive Environments
- Online natural gradient as a Kalman filter
- Stochastic sub-sampled Newton method with variance reduction
- Universal statistics of Fisher information in deep neural networks: mean field approach
- Understanding approximate Fisher information for fast convergence of natural gradient descent in wide neural networks
- Wasserstein proximal of GANs
- Parametrisation independence of the natural gradient in overparametrised systems