Least-squares regularized regression with dependent samples and q-penalty
From MaRDI portal
Publication: 2903163
DOI: 10.1080/00036811.2011.559465 · zbMath: 1271.68203 · OpenAlex: W1985951614 · Wikidata: Q58179031 · Scholia: Q58179031 · MaRDI QID: Q2903163
Publication date: 23 August 2012
Published in: Applicable Analysis
Full work available at URL: https://doi.org/10.1080/00036811.2011.559465
Mathematics Subject Classification: General nonlinear regression (62J02) ⋮ Learning and adaptive systems in artificial intelligence (68T05)
Related Items (5)
- Fast learning from \(\alpha\)-mixing observations
- Learning theory estimates with observations from general stationary stochastic processes
- Unified approach to coefficient-based regularized regression
- Constructive analysis for least squares regression with generalized \(K\)-norm regularization
- Constructive analysis for coefficient regularization regression algorithms
Cites Work
- Model selection for regularized least-squares algorithm in learning theory
- Regularization in kernel learning
- Regularized least square regression with dependent samples
- Rates of convergence for empirical processes of stationary mixing sequences
- On consistency in nonparametric estimation under mixing conditions
- Nonparametric time series prediction through adaptive model selection
- The covering number in learning theory
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- Learning Theory
- Minimum complexity regression estimation with weakly dependent observations
- Estimating the approximation error in learning theory
- Leave-One-Out Bounds for Kernel Methods
This page was built for publication: Least-squares regularized regression with dependent samples and q-penalty