Regularized least square regression with dependent samples
Publication: Q849335
DOI: 10.1007/s10444-008-9099-y · zbMATH Open: 1191.68535 · OpenAlex: W2032882463 · MaRDI QID: Q849335
Publication date: 25 February 2010
Published in: Advances in Computational Mathematics
Full work available at URL: https://doi.org/10.1007/s10444-008-9099-y
Recommendations
- Regularized least square regression with unbounded and dependent sampling
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Regularized semi-supervised least squares regression with dependent samples
- Least-square regularized regression with non-iid sampling
- Coefficient-based regularized regression with dependent and unbounded sampling
- Least square regression with \(l^{p}\)-coefficient regularization
- Least square regression with coefficient regularization by gradient descent
- Partial least squares with a regularized weight
- Coefficient regularized regression with non-iid sampling
- Least-square estimation for regression on random designs for absolutely regular observations
Classifications: Learning and adaptive systems in artificial intelligence (68T05); Data structures (68P05); Fourier and Fourier-Stieltjes transforms and other transforms of Fourier type (42B10)
Cites Work
- Regularization networks and support vector machines
- Title not available
- Theory of Reproducing Kernels
- Learning Theory
- DOI 10.1162/153244302760200704
- The Invariance Principle for Stationary Processes
- DOI 10.1162/153244303321897690
- Shannon sampling and function reconstruction from point values
- Leave-One-Out Bounds for Kernel Methods
- Shannon sampling. II: Connections to learning theory
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Almost sure invariance principles for weakly dependent vector-valued random variables
- Mixing properties of Harris chains and autoregressive processes
- Minimum complexity regression estimation with weakly dependent observations
- Learning and generalisation. With applications to neural networks.
- Learning rates of regularized regression for exponentially strongly mixing sequence
Cited In (29)
- Convergence rate for the moving least-squares learning with dependent sampling
- Indefinite kernel network with \(l^q\)-norm regularization
- Generalization bounds of ERM algorithm with Markov chain samples
- Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
- On the K-functional in learning theory
- Least-square regularized regression with non-iid sampling
- An efficient kernel learning algorithm for semisupervised regression problems
- Learning from regularized regression algorithms with \(p\)-order Markov chain sampling
- Analysis of regularized least squares ranking with centered reproducing kernel
- Regularized least-squares regression: learning from a sequence
- Consistency analysis of spectral regularization algorithms
- Large margin unified machines with non-i.i.d. process
- Spectral algorithms for learning with dependent observations
- Regularized semi-supervised least squares regression with dependent samples
- Least square regression with indefinite kernels and coefficient regularization
- Indefinite kernel network with dependent sampling
- System identification using kernel-based regularization: new insights on stability and consistency issues
- Coefficient regularized regression with non-iid sampling
- Reproducing Kernel Banach Spaces with the ℓ1 Norm II: Error Analysis for Regularized Least Square Regression
- Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Learning rate of distribution regression with dependent samples
- Learning Theory Estimates with Observations from General Stationary Stochastic Processes
- Online regularized pairwise learning with non-i.i.d. observations
- Regression learning with non-identically and non-independently sampling
- Regularized least square regression with unbounded and dependent sampling
- Application of integral operator for regularized least-square regression
- Fast learning from \(\alpha\)-mixing observations
- A note on application of integral operator in learning theory