Learning from regularized regression algorithms with \(p\)-order Markov chain sampling
Publication:423185
DOI: 10.1007/s11766-011-2701-y · zbMath: 1249.68182 · OpenAlex: W2019062291 · MaRDI QID: Q423185
Jianli Wang, Bao Huai Sheng, Jing Zhang
Publication date: 1 June 2012
Published in: Applied Mathematics. Series B (English Edition)
Full work available at URL: https://doi.org/10.1007/s11766-011-2701-y
Classification: General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Cites Work
- Least square regression with indefinite kernels and coefficient regularization
- Regularized least square regression with dependent samples
- Learning from dependent observations
- Learning from uniformly ergodic Markov chains
- Rates of convergence for empirical processes of stationary mixing sequences
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- The covering number in learning theory
- New approaches to statistical learning theory
- The performance bounds of learning machines based on exponentially strongly mixing sequences
- A Hoeffding-type inequality for ergodic time series
- On the mathematical foundations of learning
- Capacity of reproducing kernel spaces in learning theory
- Minimum complexity regression estimation with weakly dependent observations
- Shannon sampling and function reconstruction from point values