Learning with centered reproducing kernels
Publication: Q6496339
DOI: 10.1142/S0219530523400018
MaRDI QID: Q6496339
Authors: Chendi Wang, Xin Guo, Qiang Wu
Publication date: 3 May 2024
Published in: Analysis and Applications (Singapore)
Mathematics Subject Classification
- Nonparametric regression and quantile regression (62G08)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Computer science (68-XX)
Cites Work
- Optimal rates for the regularized least-squares algorithm
- Support vector machine soft margin classifiers: error analysis
- Optimal designs of positive definite kernels for scattered data approximation
- Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Bases for conditionally positive definite kernels
- Centered kernel alignment enhancing neural network pretraining for MRI-based dementia diagnosis
- Algorithms for learning kernels based on centered alignment
- Analysis of support vector machine classification
- Training SVMs without offset
- Distributed learning with regularized least squares
- Universality of deep convolutional neural networks
- Learning theory of distributed regression with bias corrected regularization kernel network
- Learning theory of distributed spectral algorithms
- Distributed semi-supervised learning with kernel ridge regression
- Kernel sliced inverse regression: regularization and consistency
- Multiple kernel clustering based on centered kernel alignment
- Reproducing kernels of Sobolev–Slobodeckiĭ spaces via Green's kernel approach: theory and applications
- Semi-supervised learning with summary statistics
- Approximating functions with multi-features by deep convolutional neural networks
- Operator-valued positive definite kernels and differentiable universality
- Deep neural networks can stably solve high-dimensional, noisy, non-linear inverse problems
- Neural tangent kernel: convergence and generalization in neural networks (invited paper)
- Convergence and finite sample approximations of entropic regularized Wasserstein distances in Gaussian and RKHS settings