Learning deep linear neural networks: Riemannian gradient flows and convergence to global minimizers
From MaRDI portal
Publication:5073895
DOI: 10.1093/imaiai/iaaa039
OpenAlex: W3127516423
Wikidata: Q115276290
Scholia: Q115276290
MaRDI QID: Q5073895
Authors: Bubacarr Bah, Holger Rauhut, Ulrich Terstiege, Michael Westdickenberg
Publication date: 4 May 2022
Published in: Information and Inference: A Journal of the IMA
Full work available at URL: https://arxiv.org/abs/1910.05505
Related Items (10)
Geometry of Linear Convolutional Networks
Global convergence of the gradient method for functions definable in o-minimal structures
Deep Linear Networks for Matrix Completion—an Infinite Depth Limit
Certifying the Absence of Spurious Local Minima at Infinity
Side effects of learning from low-dimensional data embedded in a Euclidean space
Gradient descent for deep matrix factorization: dynamics and implicit bias towards low rank
Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting)
Unnamed Item
Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples
Information theory and recovery algorithms for data fusion in Earth observation