Regularized least square regression with unbounded and dependent sampling
From MaRDI portal
DOI: 10.1155/2013/139318 · zbMATH Open: 1273.62208 · OpenAlex: W1994904367 · Wikidata: Q58915348 · Scholia: Q58915348 · MaRDI QID: Q369717 · FDO: Q369717
Authors: Xiaorong Chu, Hongwei Sun
Publication date: 19 September 2013
Published in: Abstract and Applied Analysis
Full work available at URL: https://doi.org/10.1155/2013/139318
Recommendations
- Regularized least square regression with dependent samples
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Coefficient-based regularized regression with dependent and unbounded sampling
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Least-square regularized regression with non-iid sampling
Mathematics Subject Classification:
- Time series, auto-correlation, regression, etc. in statistics (GARCH) (62M10)
- Learning and adaptive systems in artificial intelligence (68T05)
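The classification above places this publication in learning theory. As a rough illustration of the regularized least-squares (kernel ridge) algorithm that this line of work studies, here is a minimal sketch; the Gaussian kernel, the bandwidth, the regularization parameter, and the synthetic data are all assumptions for illustration, not taken from the paper:

```python
import numpy as np

def kernel_ridge_fit(X, y, lam, gamma=1.0):
    """Solve the regularized least-squares problem in an RKHS:
    f_z = argmin_f (1/m) * sum_i (f(x_i) - y_i)^2 + lam * ||f||_K^2.
    By the representer theorem this reduces to the linear system
    (K + lam * m * I) alpha = y over the sample points."""
    m = X.shape[0]
    # Gaussian kernel matrix (an assumed choice of kernel; gamma is its bandwidth)
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * sq_dists)
    alpha = np.linalg.solve(K + lam * m * np.eye(m), y)
    return alpha, X, gamma

def kernel_ridge_predict(model, X_new):
    """Evaluate f_z(x) = sum_i alpha_i * K(x, x_i) at new points."""
    alpha, X, gamma = model
    sq_dists = np.sum((X_new[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq_dists) @ alpha

# Toy usage on synthetic (here i.i.d.) data; the cited works analyze how the
# error of this estimator behaves when the sample is dependent or unbounded.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(50)
model = kernel_ridge_fit(X, y, lam=1e-3)
pred = kernel_ridge_predict(model, X)
```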
Cites Work
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Shannon sampling and function reconstruction from point values
- Shannon sampling. II: Connections to learning theory
- Concentration estimates for learning with unbounded sampling
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Least square regression with indefinite kernels and coefficient regularization
- Spectral Algorithms for Supervised Learning
- Mixing properties of Harris chains and autoregressive processes
- On regularization algorithms in learning theory
- Application of integral operator for regularized least-square regression
- A note on application of integral operator in learning theory
- ERM learning with unbounded sampling
- Half supervised coefficient regularization for regression learning with unbounded sampling
- Online learning with Markov sampling
- Integral operator approach to learning theory with unbounded sampling
- Minimum complexity regression estimation with weakly dependent observations
- Regularized least square regression with dependent samples
Cited In (15)
- Convergence rate of SVM for kernel-based robust regression
- Convergence rate for the moving least-squares learning with dependent sampling
- Least-square regularized regression with non-iid sampling
- Regularized least-squares regression: learning from a sequence
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Support vector machines regression with unbounded sampling
- Regularized semi-supervised least squares regression with dependent samples
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Analysis of regression algorithms with unbounded sampling
- The consistency of least-square regularized regression with negative association sequence
- Coefficient-based regularized regression with dependent and unbounded sampling
- Online regularized pairwise learning with non-i.i.d. observations
- Regression learning with non-identically and non-independently sampling
- Regularized least square regression with dependent samples
- Fast learning from \(\alpha\)-mixing observations