ONLINE LEARNING WITH MARKOV SAMPLING
From MaRDI portal
Publication: 3621441
DOI: 10.1142/S0219530509001293
zbMath: 1170.68022
MaRDI QID: Q3621441
Publication date: 21 April 2009
Published in: Analysis and Applications
Mathematics Subject Classification:
- Computational learning theory (68Q32)
- Markov chains (discrete-time Markov processes on discrete state spaces) (60J10)
- Rate of convergence, degree of approximation (41A25)
- Probability theory on linear topological spaces (60B11)
- Morse-Smale systems (37D15)
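The title and classifications point to a concrete setting: kernel-based online learning in which the training pairs \((x_t, y_t)\) arrive along a Markov chain rather than i.i.d. Below is a minimal sketch of that setting, assuming a Gaussian kernel, a random walk on the circle as the sampling chain, and illustrative choices of step size \(\eta_t\) and regularization \(\lambda\); it implements the standard regularized online update \(f_{t+1} = f_t - \eta_t\big((f_t(x_t) - y_t)K_{x_t} + \lambda f_t\big)\) in a reproducing kernel Hilbert space, not the paper's specific algorithm or constants.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel(x, xp, sigma=0.2):
    """Gaussian (RBF) kernel; works elementwise on scalars or arrays."""
    return np.exp(-((x - xp) ** 2) / (2.0 * sigma ** 2))

def markov_step(x, step=0.15):
    """One step of a random walk on the circle [0, 1): a simple
    uniformly ergodic Markov chain standing in for the non-i.i.d.
    sampling process (an illustrative assumption, not from the paper)."""
    return (x + rng.normal(scale=step)) % 1.0

def target(x):
    """Unknown regression function the learner tries to recover."""
    return np.sin(2.0 * np.pi * x)

lam = 0.01   # regularization parameter lambda (illustrative)
T = 1000     # number of Markov-sampled observations

# The iterate f_t is a kernel expansion: f_t = sum_i coeffs[i] * K(centers[i], .)
centers = np.empty(T)
coeffs = np.empty(T)
n = 0

x = rng.random()  # initial state of the chain
for t in range(1, T + 1):
    x = markov_step(x)                   # dependent sample x_t
    y = target(x) + 0.1 * rng.normal()   # noisy label y_t
    eta = 0.5 / t ** 0.6                 # decaying step size eta_t (illustrative)

    # Evaluate f_t(x_t) and form the pointwise error
    pred = float(np.dot(coeffs[:n], gaussian_kernel(centers[:n], x)))
    err = pred - y

    # Online update: shrink old coefficients by (1 - eta*lam),
    # then append the new term -eta * err * K(x_t, .)
    coeffs[:n] *= 1.0 - eta * lam
    centers[n] = x
    coeffs[n] = -eta * err
    n += 1

# Rough check of the estimate against the target on a grid
grid = np.linspace(0.0, 1.0, 200)
f_hat = gaussian_kernel(grid[:, None], centers[None, :n]) @ coeffs[:n]
rmse = float(np.sqrt(np.mean((f_hat - target(grid)) ** 2)))
print(f"RMSE of the online estimate after {T} Markov samples: {rmse:.3f}")
```

With dependent samples of this kind, convergence analyses replace independence with ergodicity or mixing conditions on the chain, which is the common theme of the related items listed below.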
Related Items (77)
- Error analysis of the moving least-squares method with non-identical sampling
- Online regression with unbounded sampling
- Coefficient regularized regression with non-iid sampling
- Convergence rate for the moving least-squares learning with dependent sampling
- Generalized Gramians: Creating frame vectors in maximal subspaces
- ERM learning algorithm for multi-class classification
- The optimal solution of multi-kernel regularization learning
- The consistency of least-square regularized regression with negative association sequence
- Reproducing kernels: harmonic analysis and some of their applications
- Learning rate of distribution regression with dependent samples
- Regularized least square regression with unbounded and dependent sampling
- ERM scheme for quantile regression
- Sharp learning rates of coefficient-based \(l^q\)-regularized regression with indefinite kernels
- Integral operator approach to learning theory with unbounded sampling
- An oracle inequality for regularized risk minimizers with strongly mixing observations
- Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces
- Kernel-based online gradient descent using distributed approach
- Kernel Methods for the Approximation of Nonlinear Systems
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Generalization bounds of ERM algorithm with Markov chain samples
- Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
- New Hilbert space tools for analysis of graph Laplacians and Markov processes
- Quantitative convergence analysis of kernel based large-margin unified machines
- Approximation of Lyapunov functions from noisy data
- Approximation analysis of learning algorithms for support vector regression and quantile regression
- Federated learning for minimizing nonsmooth convex loss functions
- An empirical feature-based learning algorithm producing sparse approximations
- Regression learning with non-identically and non-independently sampling
- Sampling and reconstruction of concentrated reproducing kernel signals in mixed Lebesgue spaces
- Learning performance of uncentered kernel-based principal component analysis
- Learning theory viewpoint of approximation by positive linear operators
- Metric duality between positive definite kernels and boundary processes
- Concentration estimates for learning with unbounded sampling
- Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
- LEARNING GRADIENTS FROM NONIDENTICAL DATA
- Online learning for quantile regression and support vector regression
- Analysis of Online Composite Mirror Descent Algorithm
- Convolution random sampling in multiply generated shift-invariant spaces of \(L^p(\mathbb{R}^d)\)
- Random sampling and reconstruction in multiply generated shift-invariant spaces
- Classification with non-i.i.d. sampling
- Learning with varying insensitive loss
- Compressed classification learning with Markov chain samples
- Random sampling in multiply generated shift-invariant subspaces of mixed Lebesgue spaces \(L^{p,q}(\mathbb{R}\times\mathbb{R}^d)\)
- A new comparison theorem on conditional quantiles
- Distributed learning and distribution regression of coefficient regularization
- Convergence rate of kernel canonical correlation analysis
- Generalization performance of least-square regularized regression algorithm with Markov chain samples
- Relevant sampling in finitely generated shift-invariant spaces
- Constructive analysis for coefficient regularization regression algorithms
- Monopoles, dipoles, and harmonic functions on Bratteli diagrams
- Optimal convergence rates of high order Parzen windows with unbounded sampling
- Random sampling in shift invariant spaces
- Logistic classification with varying gaussians
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Learning from non-identical sampling for classification
- Concentration estimates for learning with \(\ell^{1}\)-regularizer and data dependent hypothesis spaces
- Coefficient-based regression with non-identical unbounded sampling
- Least-square regularized regression with non-iid sampling
- Concentration estimates for the moving least-square method in learning theory
- W-Markov measures, transfer operators, wavelets and multiresolutions
- Learning Theory of Randomized Sparse Kaczmarz Method
- Spectral Theory for Gaussian Processes: Reproducing Kernels, Boundaries, and L2-Wavelet Generators with Fractional Scales
- Learning rate of magnitude-preserving regularization ranking with dependent samples
- Learning rates of gradient descent algorithm for classification
- Nonuniform sampling, reproducing kernels, and the associated Hilbert spaces
- A note on application of integral operator in learning theory
- Learning from uniformly ergodic Markov chains
- ONLINE REGRESSION WITH VARYING GAUSSIANS AND NON-IDENTICAL DISTRIBUTIONS
- High order Parzen windows and randomized sampling
- Optimal rate for support vector machine regression with Markov chain samples
- Online Classification with Varying Gaussians
- Generalization performance of Gaussian kernels SVMC based on Markov sampling
- Thresholded spectral algorithms for sparse approximations
- CONVERGENCE ANALYSIS OF COEFFICIENT-BASED REGULARIZATION UNDER MOMENT INCREMENTAL CONDITION
- Error analysis of the moving least-squares regression learning algorithm with β-mixing and non-identical sampling
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- What are SRB measures, and which dynamical systems have them?
- Optimum bounds for the distributions of martingales in Banach spaces
- Regularization networks and support vector machines
- Fully online classification by regularization
- Online learning algorithms
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Learning Theory
- Capacity of reproducing kernel spaces in learning theory
- Online Regularized Classification Algorithms
- ESTIMATING THE APPROXIMATION ERROR IN LEARNING THEORY
- Shannon sampling and function reconstruction from point values
- Online Learning with Kernels
- Differentiable dynamical systems