Classification with Gaussians and convex loss
zbMATH Open: Zbl 1235.68207 · MaRDI QID: Q2880932
Authors: Dao-Hong Xiang, Ding-Xuan Zhou
Publication date: 17 April 2012
Published in: Journal of Machine Learning Research (JMLR)
Full work available at URL: http://www.jmlr.org/papers/v10/xiang09a.html
Recommendations
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- Convexity, Classification, and Risk Bounds
- Learning with convex loss and indefinite kernels
- Optimal learning with Gaussians and correntropy loss
- Analysis to Neyman-Pearson classification with convex loss function
- High-dimensional regression and classification under a class of convex loss functions
- Convex calibration dimension for multiclass loss matrices
- Convex surrogate minimization in classification
- On the rate of convergence for multi-category classification based on convex losses
Keywords: reproducing kernel Hilbert space; binary classification; approximation; covering number; general convex loss; varying Gaussian kernels
MSC classifications: Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Learning and adaptive systems in artificial intelligence (68T05)
Cited In (30)
- Logistic classification with varying Gaussians
- Learning rates for the risk of kernel-based quantile regression estimators in additive models
- Calibration of \(\epsilon\)-insensitive loss in support vector machines regression
- Analysis to Neyman-Pearson classification with convex loss function
- Classification with Gaussians and convex loss. II: Improving error bounds by noise conditions
- Online classification with varying Gaussians
- Covering numbers of Gaussian reproducing kernel Hilbert spaces
- Learning with Convex Loss and Indefinite Kernels
- Employing different loss functions for the classification of images via supervised learning
- Binning in Gaussian kernel regularization
- Large margin unified machines with non-i.i.d. process
- A Note on Support Vector Machines with Polynomial Kernels
- Quantile regression with \(\ell_1\)-regularization and Gaussian kernels
- Multi-kernel regularized classifiers
- Learning rates of regression with \(q\)-norm loss and threshold
- Distributed regularized least squares with flexible Gaussian kernels
- Unregularized online algorithms with varying Gaussians
- Comparison theorems on large-margin learning
- Learning rates of kernel-based robust classification
- Convergence analysis for complementary-label learning with kernel ridge regression
- Conditional quantiles with varying Gaussians
- Learnability of Gaussians with flexible variances
- Learning with sample dependent hypothesis spaces
- Optimal regression rates for SVMs using Gaussian kernels
- A new comparison theorem on conditional quantiles
- An oracle inequality for regularized risk minimizers with strongly mixing observations
- Learning Rates for Classification with Gaussian Kernels
- Learning from non-identical sampling for classification
- A study on the error of distributed algorithms for big data classification with SVM
- Optimal learning with Gaussians and correntropy loss