Pages that link to "Item:Q2182898"
From MaRDI portal
The following pages link to Optimal approximation of piecewise smooth functions using deep ReLU neural networks (Q2182898):
Displayed 50 items.
- Deep learning observables in computational fluid dynamics (Q777521)
- Solving high-dimensional Hamilton-Jacobi-Bellman PDEs using neural networks: perspectives from the theory of controlled diffusions and measures on path space (Q825596)
- Applied harmonic analysis and data processing. Abstracts from the workshop held March 25--31, 2018 (Q1731982)
- Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation (Q2023320)
- Efficient approximation of solutions of parametric linear transport equations by ReLU DNNs (Q2026114)
- Topological properties of the set of functions generated by neural networks of fixed size (Q2031060)
- Numerical solution of the parametric diffusion equation by deep neural networks (Q2049099)
- Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem (Q2055036)
- Fast generalization error bound of deep learning without scale invariance of activation functions (Q2055056)
- Approximation rates for neural networks with encodable weights in smoothness spaces (Q2055067)
- Theory of deep convolutional neural networks. II: Spherical analysis (Q2057723)
- Optimal approximation rate of ReLU networks in terms of width and depth (Q2065073)
- Constructive deep ReLU neural network approximation (Q2067309)
- Sparsest piecewise-linear regression of one-dimensional data (Q2074905)
- Discontinuous neural networks and discontinuity learning (Q2088828)
- On the approximation of rough functions with deep neural networks (Q2089012)
- Adaptive machine learning-based surrogate modeling to accelerate PDE-constrained optimization in enhanced oil recovery (Q2095535)
- CAS4DL: Christoffel adaptive sampling for function approximation via deep learning (Q2098302)
- Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples (Q2105108)
- A measure theoretical approach to the mean-field maximum principle for training NeurODEs (Q2105521)
- DNN expression rate analysis of high-dimensional PDEs: application to option pricing (Q2117328)
- A theoretical analysis of deep neural networks and parametric PDEs (Q2117329)
- Neural network identifiability for a family of sigmoidal nonlinearities (Q2117333)
- Approximation spaces of deep neural networks (Q2117336)
- Exponential ReLU DNN expression of holomorphic maps in high dimension (Q2117341)
- SelectNet: self-paced learning for high-dimensional partial differential equations (Q2131038)
- Multivariate neural network interpolation operators (Q2151614)
- Solving eigenvalue PDEs of metastable diffusion processes using artificial neural networks (Q2157080)
- Nonlinear approximation via compositions (Q2185653)
- On the minimax optimality and superiority of deep neural network learning over sparse parameter spaces (Q2185697)
- Theory of deep convolutional neural networks: downsampling (Q2185717)
- Affine symmetries and neural network identifiability (Q2214101)
- Deep ReLU network expression rates for option prices in high-dimensional, exponential Lévy models (Q2238770)
- Universality of deep convolutional neural networks (Q2300759)
- Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations (Q2327815)
- Space-time error estimates for deep neural network approximations for differential equations (Q2683168)
- A convenient infinite dimensional framework for generative adversarial learning (Q2683193)
- Phase transitions in rate distortion theory and deep learning (Q2684466)
- The universal approximation theorem for complex-valued neural networks (Q2689134)
- Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting) (Q2693017)
- Approximation properties of residual neural networks for Kolmogorov PDEs (Q2697245)
- Learning elliptic partial differential equations with randomized linear algebra (Q2697403)
- The Gap between Theory and Practice in Function Approximation with Deep Neural Networks (Q4999396)
- New Function Spaces Associated to Representations of Nilpotent Lie Groups and Generalized Time-Frequency Analysis (Q5001466)
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth (Q5004339)
- Approximation of Smoothness Classes by Deep Rectifier Networks (Q5020751)
- Optimal Approximation with Sparsely Connected Deep Neural Networks (Q5025773)
- New Error Bounds for Deep ReLU Networks Using Sparse Grids (Q5025775)
- Analysis of the Generalization Error: Empirical Risk Minimization over Deep Artificial Neural Networks Overcomes the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations (Q5037569)
- (Q5053184)