Fully online classification by regularization
From MaRDI portal
Publication:2381648
DOI: 10.1016/j.acha.2006.12.001
zbMath: 1124.68099
OpenAlex: W1977073705
MaRDI QID: Q2381648
Publication date: 18 September 2007
Published in: Applied and Computational Harmonic Analysis
Full work available at URL: https://doi.org/10.1016/j.acha.2006.12.001
Keywords: regularization, error analysis, reproducing kernel Hilbert spaces, online learning, classification algorithm
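The keywords above place the paper in the setting of online classification by regularization in a reproducing kernel Hilbert space. A minimal illustrative sketch of that class of algorithms is below; it assumes a Gaussian kernel, logistic loss, and the step size schedule eta_t = 1/(lambda*t), which are common choices but are not taken from this paper's specific analysis.

```python
import numpy as np

def gaussian_kernel(x, z, sigma=1.0):
    # Gaussian (RBF) kernel, one common choice of reproducing kernel
    return np.exp(-np.sum((x - z) ** 2) / (2 * sigma ** 2))

def online_regularized_classifier(stream, lam=0.1, sigma=1.0):
    """Sketch of online regularized learning in an RKHS (illustrative only).

    Maintains f_t = sum_i alpha_i K(x_i, .) and, for each sample (x_t, y_t),
    performs the regularized stochastic gradient step
        f_{t+1} = (1 - eta_t * lam) * f_t - eta_t * l'(y_t * f_t(x_t)) * y_t * K(x_t, .)
    with the logistic loss l(v) = log(1 + exp(-v)).
    """
    points, alphas = [], []
    for t, (x, y) in enumerate(stream, start=1):
        eta = 1.0 / (lam * t)  # decaying step size (one standard schedule)
        fx = sum(a * gaussian_kernel(xi, x, sigma) for xi, a in zip(points, alphas))
        # derivative of the logistic loss with respect to f(x)
        grad = -y / (1.0 + np.exp(y * fx))
        # shrink all coefficients: effect of the regularization term lam * f
        alphas = [(1 - eta * lam) * a for a in alphas]
        points.append(x)
        alphas.append(-eta * grad)
    def f(x):
        return sum(a * gaussian_kernel(xi, x, sigma) for xi, a in zip(points, alphas))
    return f
```

For example, feeding the learner a few passes over linearly separable 1-D points yields a function whose sign separates the two classes. The representer-style expansion (one coefficient per observed point) is what makes a fully online kernel update possible without storing an explicit feature map.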
Related Items (18)
- LEAST SQUARE REGRESSION WITH COEFFICIENT REGULARIZATION BY GRADIENT DESCENT
- Generalization properties of doubly stochastic learning algorithms
- Distributed regression learning with coefficient regularization
- Convergence of online pairwise regression learning with quadratic loss
- Convergence analysis of online learning algorithm with two-stage step size
- Online learning for quantile regression and support vector regression
- Unregularized online learning algorithms with general loss functions
- Learning and approximation by Gaussians on Riemannian manifolds
- Learning gradients by a gradient descent algorithm
- Logistic classification with varying Gaussians
- Unnamed Item
- Unregularized online algorithms with varying Gaussians
- ONLINE LEARNING WITH MARKOV SAMPLING
- Learning rates of gradient descent algorithm for classification
- Moving quantile regression
- SVM LEARNING AND Lp APPROXIMATION BY GAUSSIANS ON RIEMANNIAN MANIFOLDS
- ONLINE REGRESSION WITH VARYING GAUSSIANS AND NON-IDENTICAL DISTRIBUTIONS
- Online Classification with Varying Gaussians
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- Multi-kernel regularized classifiers
- Relative expected instantaneous loss bounds
- Support vector machines are universally consistent
- On the Bayes-risk consistency of regularized boosting methods.
- Statistical behavior and consistency of classification methods based on convex risk minimization.
- Regularization networks and support vector machines
- Online learning algorithms
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On the Generalization Ability of On-Line Learning Algorithms
- Online Regularized Classification Algorithms
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Shannon sampling and function reconstruction from point values
- Online Learning with Kernels
- Learning Theory
- STABILITY RESULTS IN LEARNING THEORY
- Convexity, Classification, and Risk Bounds
- Theory of Reproducing Kernels
This page was built for publication: Fully online classification by regularization