Pages that link to "Item:Q5027015"
From MaRDI portal
The following pages link to Layer-Parallel Training of Deep Residual Neural Networks (Q5027015):
Displaying 15 items.
- Multigrid reduction in time with Richardson extrapolation (Q2033684)
- Quantized convolutional neural networks through the lens of partial differential equations (Q2079526)
- AutoMat: automatic differentiation for generalized standard materials on GPUs (Q2115597)
- Long-time integration of parametric evolution equations with physics-informed DeepONets (Q2683074)
- Structure-preserving deep learning (Q5014474)
- A Unified Analysis Framework for Iterative Parallel-in-Time Algorithms (Q6054282)
- Multilevel Objective-Function-Free Optimization with an Application to Neural Networks Training (Q6076869)
- Semi-implicit back propagation (Q6085636)
- Globally Convergent Multilevel Training of Deep Residual Networks (Q6108152)
- MGIC: Multigrid-in-Channels Neural Network Architectures (Q6108155)
- Connections between numerical algorithms for PDEs and neural networks (Q6156049)
- Efficient multigrid reduction-in-time for method-of-lines discretizations of linear advection (Q6159242)
- Parareal with a learned coarse model for robotic manipulation (Q6163817)
- Applications of time parallelization (Q6163822)
- A space-time parallel algorithm with adaptive mesh refinement for computational fluid dynamics (Q6163824)