Mathematical Research Data Initiative
Stronger convergence results for deep residual networks: network width scales linearly with training data size

From MaRDI portal
Publication:5095259

DOI: 10.1093/IMAIAI/IAAA030
OpenAlex: W3108510223
MaRDI QID: Q5095259
FDO: Q5095259


Author: T. C. Gülcü


Publication date: 5 August 2022

Published in: Information and Inference: A Journal of the IMA

Full work available at URL: https://arxiv.org/abs/1911.04351





zbMATH Keywords

activation function; neural tangent kernel; deep network optimization; deep residual networks


Mathematics Subject Classification ID

Artificial intelligence (68Txx)



Cited In (2)

  • Continuous limits of residual neural networks in case of large input data
  • Globally Convergent Multilevel Training of Deep Residual Networks







Retrieved from "https://portal.mardi4nfdi.de/w/index.php?title=Publication:5095259&oldid=19605413"
This page was last edited on 8 February 2024, at 12:56.