Pages that link to "Item:Q2300759"
From MaRDI portal
The following pages link to Universality of deep convolutional neural networks (Q2300759):
Displaying 50 items.
- Learning under \((1 + \epsilon)\)-moment conditions (Q778021) (← links)
- Distributed regularized least squares with flexible Gaussian kernels (Q2036424) (← links)
- Theory of deep convolutional neural networks. II: Spherical analysis (Q2057723) (← links)
- Rates of approximation by neural network interpolation operators (Q2073064) (← links)
- Stochastic Markov gradient descent and training low-bit neural networks (Q2073135) (← links)
- The construction and approximation of ReLU neural network operators (Q2086452) (← links)
- On the rate of convergence of image classifiers based on convolutional neural networks (Q2087403) (← links)
- Convolutional spectral kernel learning with generalization guarantees (Q2093403) (← links)
- Approximation of functions from Korobov spaces by deep convolutional neural networks (Q2108977) (← links)
- Error bounds for ReLU networks with depth and width parameters (Q2111556) (← links)
- Analysis of convolutional neural network image classifiers in a hierarchical max-pooling model with additional local pooling (Q2112261) (← links)
- Learning time-dependent PDEs with a linear and nonlinear separate convolutional neural network (Q2135244) (← links)
- Moreau envelope augmented Lagrangian method for nonconvex optimization with linear constraints (Q2148118) (← links)
- Approximation properties of deep ReLU CNNs (Q2157922) (← links)
- Rates of convergence of randomized Kaczmarz algorithms in Hilbert spaces (Q2168686) (← links)
- Theory of deep convolutional neural networks: downsampling (Q2185717) (← links)
- Learning rates for the kernel regularized regression with a differentiable strongly convex loss (Q2191832) (← links)
- Approximation by multivariate max-product Kantorovich-type operators and learning rates of least-squares regularized regression (Q2191850) (← links)
- Distributed semi-supervised regression learning with coefficient regularization (Q2668180) (← links)
- Quantitative estimates for neural network operators implied by the asymptotic behaviour of the sigmoidal activation functions (Q2675943) (← links)
- The universal approximation theorem for complex-valued neural networks (Q2689134) (← links)
- Deep learning for inverse problems with unknown operator (Q2689599) (← links)
- Computation and learning in high dimensions. Abstracts from the workshop held August 1--7, 2021 (hybrid meeting) (Q2693017) (← links)
- (Q4969157) (← links)
- Neural ODEs as the deep limit of ResNets with constant weights (Q4995042) (← links)
- Balanced joint maximum mean discrepancy for deep transfer learning (Q4995048) (← links)
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth (Q5004339) (← links)
- Learnable Empirical Mode Decomposition based on Mathematical Morphology (Q5024378) (← links)
- Effects of depth, width, and initialization: A convergence analysis of layer-wise training for deep linear neural networks (Q5037872) (← links)
- Generalization Error Analysis of Neural Networks with Gradient Based Regularization (Q5045671) (← links)
- DENSITY RESULTS BY DEEP NEURAL NETWORK OPERATORS WITH INTEGER WEIGHTS (Q5052610) (← links)
- (Q5053184) (← links)
- Neural network interpolation operators activated by smooth ramp functions (Q5083442) (← links)
- Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces (Q5089731) (← links)
- A note on the applications of one primary function in deep neural networks (Q5097859) (← links)
- Deep Neural Networks and PIDE Discretizations (Q5100094) (← links)
- On the K-functional in learning theory (Q5107666) (← links)
- Deep Network Approximation for Smooth Functions (Q5155613) (← links)
- Butterfly-Net: Optimal Function Representation Based on Convolutional Neural Networks (Q5162362) (← links)
- Online regularized pairwise learning with least squares loss (Q5220066) (← links)
- Deep neural networks for rotation-invariance approximation and learning (Q5236745) (← links)
- Distributed Filtered Hyperinterpolation for Noisy Data on the Sphere (Q5855635) (← links)
- Learning rates for partially linear support vector machine in high dimensions (Q5856267) (← links)
- Approximating functions with multi-features by deep convolutional neural networks (Q5873927) (← links)
- Spline representation and redundancies of one-dimensional ReLU neural network models (Q5873929) (← links)
- Training a Neural-Network-Based Surrogate Model for Aerodynamic Optimisation Using a Gaussian Process (Q5880409) (← links)
- A deep network construction that adapts to intrinsic dimensionality beyond the domain (Q6054952) (← links)
- Theory of deep convolutional neural networks. III: Approximating radial functions (Q6055154) (← links)
- Neural network approximation of continuous functions in high dimensions with applications to inverse problems (Q6056231) (← links)
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation (Q6062170) (← links)