Neural tangent kernel: convergence and generalization in neural networks (invited paper)
Publication: 6086982
DOI: 10.1145/3406325.3465355
arXiv: 1806.07572
MaRDI QID: Q6086982
Arthur Jacot, Franck Gabriel, Clément Hongler
Publication date: 14 November 2023
Published in: Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing
Full work available at URL: https://arxiv.org/abs/1806.07572
Related Items (8)
- Convergence rates for shallow neural networks learned by gradient descent
- On the spectral bias of coupled frequency predictor-corrector triangular DNN: the convergence analysis
- Benign Overfitting and Noisy Features
- α-Stable convergence of heavy-/light-tailed infinitely wide neural networks
- Learning physical models that can respect conservation laws
- Stochastic gradient descent: where optimization meets machine learning
- Learning with centered reproducing kernels
- Unnamed Item