Learnability of Gaussians with flexible variances
Publication: 3174075
zbMATH Open: 1222.68339 · MaRDI QID: Q3174075
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v8/ying07a.html
Keywords: learning theory; Gaussian kernel; regularization scheme; Glivenko-Cantelli class; empirical covering number; flexible variances
Classification: Classification and discrimination; cluster analysis (statistical aspects) (62H30); Learning and adaptive systems in artificial intelligence (68T05)
Cited in (37 documents):
- Fast learning rate of non-sparse multiple kernel learning and optimal regularization strategies
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- Title not available
- Convergence analysis of online algorithms
- Nonlinear approximation using Gaussian kernels
- A Note on Support Vector Machines with Polynomial Kernels
- Some properties of Gaussian reproducing kernel Hilbert spaces and their implications for function approximation and learning theory
- Learning and approximation by Gaussians on Riemannian manifolds
- Error analysis on regularized regression based on the maximum correntropy criterion
- Learnability with respect to fixed distributions
- On extension theorems and their connection to universal consistency in machine learning
- Error bounds for \(l^p\)-norm multiple kernel learning with least square loss
- Multi-kernel regularized classifiers
- Refined Rademacher Chaos Complexity Bounds with Applications to the Multikernel Learning Problem
- Distributed regularized least squares with flexible Gaussian kernels
- Orthogonality from disjoint support in reproducing kernel Hilbert spaces
- Unregularized online algorithms with varying Gaussians
- The optimal solution of multi-kernel regularization learning
- Convergence of online pairwise regression learning with quadratic loss
- Approximation of kernel matrices by circulant matrices and its application in kernel selection methods
- High order Parzen windows and randomized sampling
- Summation of Gaussian shifts as Jacobi's third theta function
- Learning performance of regularized regression with multiscale kernels based on Markov observations
- Conditional quantiles with varying Gaussians
- Learning with sample dependent hypothesis spaces
- Parzen windows for multi-class classification
- Optimal regression rates for SVMs using Gaussian kernels
- Quantitative convergence analysis of kernel based large-margin unified machines
- Error Estimates for Multivariate Regression on Discretized Function Spaces
- Learning the coordinate gradients
- Learning Rates for Classification with Gaussian Kernels
- Maximum correntropy criterion regression models with tending-to-zero scale parameters
- Optimal learning with Gaussians and correntropy loss
- Least square regularized regression for multitask learning
- Online Classification with Varying Gaussians
- Error bounds for learning the kernel
- Learning rates of multi-kernel regularized regression