Large margin unified machines with non-i.i.d. process
From MaRDI portal
Publication: 6599669
Recommendations
- Comparison theorems on large-margin learning
- Quantitative convergence analysis of kernel based large-margin unified machines
- Classification with non-i.i.d. sampling
- Regression learning with non-identically and non-independently sampling
- Error analysis of classification learning algorithms based on LUMs loss
Cites work
- Scientific article; zbMATH DE number 962825 (no title available)
- A Statistical Learning Approach to Modal Regression
- Capacity of reproducing kernel spaces in learning theory
- Classification with Gaussians and convex loss
- Classification with non-i.i.d. sampling
- Comparison theorems on large-margin learning
- Conditional quantiles with varying Gaussians
- Distance-Weighted Discrimination
- Distributed minimum error entropy algorithms
- Estimating conditional quantiles with the help of the pinball loss
- Fast rates for support vector machines using Gaussian kernels
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- Hard or soft classification? Large-margin unified machines
- Lasso guarantees for \(\beta \)-mixing heavy-tailed time series
- Learning Theory
- Learning rates of least-square regularized regression
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Logistic classification with varying Gaussians
- Modeling interactive components by coordinate kernel polynomial models
- Multi-kernel regularized classifiers
- Online learning with Markov sampling
- Quantitative convergence analysis of kernel based large-margin unified machines
- Rates of convergence for empirical processes of stationary mixing sequences
- Regularized least square regression with dependent samples
- Support vector machine soft margin classifiers: error analysis
- Support-vector networks
- The covering number in learning theory
- The elements of statistical learning. Data mining, inference, and prediction
- The sample complexity of pattern classification with neural networks: the size of the weights is more important than the size of the network