Pages that link to "Item:Q1745365"
From MaRDI portal
The following pages link to Distributed kernel-based gradient descent algorithms (Q1745365):
Displaying 18 items.
- Nonparametric regression using needlet kernels for spherical data (Q1633627)
- Distributed regularized least squares with flexible Gaussian kernels (Q2036424)
- Theory of deep convolutional neural networks. II: Spherical analysis (Q2057723)
- Distributed learning via filtered hyperinterpolation on manifolds (Q2162123)
- Distributed kernel gradient descent algorithm for minimum error entropy principle (Q2175022)
- Theory of deep convolutional neural networks: downsampling (Q2185717)
- Distributed semi-supervised regression learning with coefficient regularization (Q2668180)
- Kernel regression, minimax rates and effective dimensionality: beyond the regular case (Q3298576)
- Distributed least squares prediction for functional linear regression (Q5019925)
- Toward efficient ensemble learning with structure constraints: convergent algorithms and applications (Q5060788)
- Theory of deep convolutional neural networks. III: Approximating radial functions (Q6055154)
- Approximating smooth and sparse functions by deep neural networks: optimal approximation rates and saturation (Q6062170)
- Fully corrective gradient boosting with squared hinge: fast learning rates and early stopping (Q6072435)
- Neural network interpolation operators optimized by Lagrange polynomial (Q6077041)
- Learning sparse and smooth functions by deep sigmoid nets (Q6109261)
- Decentralized learning over a network with Nyström approximation using SGD (Q6117024)
- Value iteration for streaming data on a continuous space with gradient method in an RKHS (Q6488837)
- Distributed SGD in overparametrized linear regression (Q6496338)