Homotopy relaxation training algorithms for infinite-width two-layer ReLU neural networks
From MaRDI portal
Publication:6665313
Cites work
- scientific article; zbMATH DE number 7370588 (no title available)
- A homotopy method for parameter estimation of nonlinear differential equations with multiple optima
- A homotopy training algorithm for fully connected neural networks
- Accelerated optimization with orthogonality constraints
- Adaptive activation functions accelerate convergence in deep and physics-informed neural networks
- An adaptive homotopy tracking algorithm for solving nonlinear parametric systems with applications in nonlinear ODEs
- Computing all solutions to polynomial systems using homotopy continuation
- Greedy training algorithms for neural networks and applications to PDEs
- High-dimensional probability. An introduction with applications in data science
- Locally adaptive activation functions with slope recovery for deep and physics-informed neural networks
- Neural tangent kernel: convergence and generalization in neural networks (invited paper)
- Scaling Limit of the Stein Variational Gradient Descent: The Mean Field Regime
- Side effects of learning from low-dimensional data embedded in a Euclidean space
- Sobolev training of thermodynamic-informed neural networks for interpretable elasto-plasticity models with level set hardening
- The Numerical Solution of Systems of Polynomials Arising in Engineering and Science
- The deep Ritz method: a deep learning-based numerical algorithm for solving variational problems
- Why does unsupervised pre-training help deep learning?
Cited in (1)