Pages that link to "Item:Q2835988"
From MaRDI portal
The following pages link to Deep vs. shallow networks: An approximation theory perspective (Q2835988):
- Why does deep and cheap learning work so well? (Q1676557)
- Saturation classes for max-product neural network operators activated by sigmoidal functions (Q1682591)
- Applied harmonic analysis and data processing. Abstracts from the workshop held March 25--31, 2018 (Q1731982)
- Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions (Q1734693)
- Estimates for the neural network operators of the max-product type with continuous and \(p\)-integrable functions (Q1743217)
- Asymptotic expansion for neural network operators of the Kantorovich type and high order of approximation (Q2023320)
- Deep autoencoder based energy method for the bending, vibration, and buckling analysis of Kirchhoff plates with transfer learning (Q2035195)
- On the rate of convergence of fully connected deep neural network regression estimates (Q2054491)
- A direct approach for function approximation on data defined manifolds (Q2057766)
- An analytic layer-wise deep learning framework with applications to robotics (Q2059394)
- Optimal adaptive control of partially uncertain linear continuous-time systems with state delay (Q2094036)
- Stable recovery of entangled weights: towards robust identification of deep neural networks from minimal samples (Q2105108)
- A measure theoretical approach to the mean-field maximum principle for training NeurODEs (Q2105521)
- Error bounds for ReLU networks with depth and width parameters (Q2111556)
- Nonlinear approximation and (deep) ReLU networks (Q2117331)
- Approximation spaces of deep neural networks (Q2117336)
- On sharpness of an error bound for deep ReLU network approximation (Q2143606)
- Voronovskaja type theorems and high-order convergence neural network operators with sigmoidal functions (Q2178844)
- On the approximation by single hidden layer feedforward neural networks with fixed weights (Q2179313)
- An analysis of training and generalization errors in shallow and deep networks (Q2185668)
- Universal approximation with quadratic deep networks (Q2185719)
- Function approximation by deep networks (Q2191837)
- Topology optimization based on deep representation learning (DRL) for compliance and stress-constrained design (Q2205158)
- Nonparametric regression using deep neural networks with ReLU activation function (Q2215715)
- Convergence of the deep BSDE method for coupled FBSDEs (Q2223111)
- A linear relation between input and first layer in neural networks (Q2294577)
- Universality of deep convolutional neural networks (Q2300759)
- On deep learning as a remedy for the curse of dimensionality in nonparametric regression (Q2313286)
- Application of deep learning neural network to identify collision load conditions based on permanent plastic deformation of shell structures (Q2319402)
- Super-resolution meets machine learning: approximation of measures (Q2338563)
- Estimation of a regression function on a manifold by fully connected deep neural networks (Q2676904)
- Local approximation of operators (Q2689140)
- Mini-workshop: Analysis of data-driven optimal control. Abstracts from the mini-workshop held May 9--15, 2021 (hybrid meeting) (Q2693004)
- Deep distributed convolutional neural networks: Universality (Q4560301)
- Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ (Q4615657)
- Generalization Error of Minimum Weighted Norm and Kernel Interpolation (Q4999364)
- Optimal Approximation with Sparsely Connected Deep Neural Networks (Q5025773)
- (Q5053261)
- High-Dimensional Learning Under Approximate Sparsity with Applications to Nonsmooth Estimation and Regularized Neural Networks (Q5060495)
- Feedforward Neural Networks and Compositional Functions with Applications to Dynamical Systems (Q5065061)
- Theoretical issues in deep networks (Q5073211)
- Full error analysis for the training of deep neural networks (Q5083408)
- Rectified deep neural networks overcome the curse of dimensionality for nonsmooth value functions in zero-sum games of nonlinear stiff systems (Q5132232)
- Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks (Q5162362)
- Quantitative estimates involving \(K\)-functionals for neural network-type operators (Q5197961)
- Deep neural networks for rotation-invariance approximation and learning (Q5236745)
- Robust randomized optimization with \(k\) nearest neighbors (Q5236747)
- Approximating functions with multi-features by deep convolutional neural networks (Q5873927)
- Low-rank approximation of continuous functions in Sobolev spaces with dominating mixed smoothness (Q5886873)
- A Proof that Artificial Neural Networks Overcome the Curse of Dimensionality in the Numerical Approximation of Black--Scholes Partial Differential Equations (Q5889064)