Regularized least square regression with dependent samples
Publication: 849335
DOI: 10.1007/s10444-008-9099-y
zbMath: 1191.68535
OpenAlex: W2032882463
MaRDI QID: Q849335
Publication date: 25 February 2010
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-008-9099-y
MSC classification:
Learning and adaptive systems in artificial intelligence (68T05)
Fourier and Fourier-Stieltjes transforms and other transforms of Fourier type (42B10)
Data structures (68P05)
Related Items (26)
Coefficient regularized regression with non-iid sampling
Convergence rate for the moving least-squares learning with dependent sampling
Least-squares regularized regression with dependent samples and \(q\)-penalty
Online regularized pairwise learning with non-i.i.d. observations
Application of integral operator for regularized least-square regression
An efficient kernel learning algorithm for semisupervised regression problems
Learning rate of distribution regression with dependent samples
Regularized least square regression with unbounded and dependent sampling
Least square regression with indefinite kernels and coefficient regularization
Spectral algorithms for learning with dependent observations
Generalization bounds of ERM algorithm with Markov chain samples
Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
On the K-functional in learning theory
Regression learning with non-identically and non-independently sampling
Learning from regularized regression algorithms with \(p\)-order Markov chain sampling
Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
Consistency analysis of spectral regularization algorithms
Regularized least-squares regression: learning from a sequence
Fast learning from \(\alpha\)-mixing observations
Learning Theory Estimates with Observations from General Stationary Stochastic Processes
Indefinite kernel network with \(l^q\)-norm regularization
Least-square regularized regression with non-iid sampling
System identification using kernel-based regularization: new insights on stability and consistency issues
A note on application of integral operator in learning theory
Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
Indefinite kernel network with dependent sampling
Cites Work
- Unnamed Item
- Almost sure invariance principles for weakly dependent vector-valued random variables
- Learning and generalisation. With applications to neural networks.
- Regularization networks and support vector machines
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Learning rates of least-square regularized regression
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Learning Theory
- Mixing properties of Harris chains and autoregressive processes
- Minimum complexity regression estimation with weakly dependent observations
- DOI: 10.1162/153244302760200704
- Shannon sampling and function reconstruction from point values
- Leave-One-Out Bounds for Kernel Methods
- DOI: 10.1162/153244303321897690
- The Invariance Principle for Stationary Processes
- Theory of Reproducing Kernels