SVM-boosting based on Markov resampling: theory and algorithm
From MaRDI portal
Publication:2057733
Mathematics Subject Classification:
- Applications of Markov chains and discrete-time Markov processes on general state spaces (social mobility, learning theory, industrial processes, etc.) (60J20)
- Classification and discrimination; cluster analysis (statistical aspects) (62H30)
- Learning and adaptive systems in artificial intelligence (68T05)
- Nonparametric statistical resampling methods (62G09)
Cites work
- scientific article; zbMATH DE number 3860199
- scientific article; zbMATH DE number 4048925
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 708500
- A decision-theoretic generalization of on-line learning and an application to boosting
- A theory of multiclass boosting
- AdaBoost is consistent
- Additive logistic regression: a statistical view of boosting. (With discussion and a rejoinder by the authors)
- Arcing classifiers. (With discussion)
- Bagging predictors
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Boosted kernel ridge regression: optimal learning rates and early stopping
- Boosting a weak learning algorithm by majority
- Boosting with early stopping: convergence and consistency
- Convexity, Classification, and Risk Bounds
- Improved boosting algorithms using confidence-rated predictions
- Learning and generalisation. With applications to neural networks
- Learning rates of least-square regularized regression
- Multiclass boosting: margins, codewords, losses, and algorithms
- On the Bayes-risk consistency of regularized boosting methods
- On weak base hypotheses and their implications for boosting regression and classification
- Process consistency for AdaBoost
- Regularization networks and support vector machines
- Statistical behavior and consistency of classification methods based on convex risk minimization
- Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images
- The Generalization Ability of Online Algorithms for Dependent Data
- Theory of Reproducing Kernels
- Weak convergence and empirical processes. With applications to statistics
Cited in (3)