Pages that link to "Item:Q5162356"
The following pages link to Dying ReLU and Initialization: Theory and Numerical Examples (Q5162356):
Displaying 10 items.
- A comprehensive and fair comparison of two neural operators (with practical extensions) based on FAIR data (Q2138799)
- A proof of convergence for gradient descent in the training of artificial neural networks for constant target functions (Q2145074)
- A proof of convergence for stochastic gradient descent in the training of artificial neural networks with ReLU activation for constant target functions (Q2167333)
- Training thinner and deeper neural networks: jumpstart regularization (Q2170213)
- A weight initialization based on the linear product structure for neural networks (Q2247166)
- Convex and concave envelopes of artificial neural network activation functions for deterministic global optimization (Q2689856)
- Artificial-neural-network-based nonlinear algebraic models for large-eddy simulation of compressible wall-bounded turbulence (Q5886405)
- Estimating permeability of 3D micro-CT images by physics-informed CNNs based on DNS (Q6106108)
- Learning Specialized Activation Functions for Physics-Informed Neural Networks (Q6143615)
- Lifelong deep learning-based control of robot manipulators (Q6493615)