Landscape and training regimes in deep learning
Publication:2231925
Recommendations
- Archetypal landscapes for deep neural networks
- Optimization Landscape of Neural Networks
- Shaping the learning landscape in neural networks around wide flat minima
- Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
- Deep learning
- Deep learning
- Learning with deep cascades
- Exploring strategies for training deep neural networks
- Disentangling feature and lazy training in deep neural networks
Cites work
- A jamming transition from under- to over-parametrization affects generalization in deep learning
- A mean field view of the landscape of two-layer neural networks
- Asymptotic learning curves of kernel methods: empirical data versus teacher–student paradigm
- Bayesian learning for neural networks
- Breaking the curse of dimensionality with convex neural networks
- Comparing dynamics: deep neural networks versus glassy systems
- Disentangling feature and lazy training in deep neural networks
- Distance-based classification with Lipschitz functions
- Exact theory of dense amorphous hard spheres in high dimension. III: The full replica symmetry breaking solution
- Global minima of overparameterized neural networks
- High-dimensional dynamics of generalization error in neural networks
- Linearized two-layers neural networks in high dimension
- Mean field analysis of neural networks: a central limit theorem
- Mean field analysis of neural networks: a law of large numbers
- On the information bottleneck theory of deep learning
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- Scaling description of generalization with number of parameters in deep learning
- Surfing on minima of isostatic landscapes: avalanches and unjamming transition
- The simplest model of jamming
- Universality of jamming of nonspherical particles
- Wide neural networks of any depth evolve as linear models under gradient descent
Cited in (5)
- DANTE: deep alternations for training neural networks
- Revisiting Landscape Analysis in Deep Neural Networks: Eliminating Decreasing Paths to Infinity
- Learning sparse features can lead to overfitting in neural networks
- Relative stability toward diffeomorphisms indicates performance in deep nets
- Dynamical mean field theory for models of confluent tissues and beyond