Learning rates of regularized regression for exponentially strongly mixing sequence
DOI: 10.1016/j.jspi.2007.09.003
zbMath: 1134.62050
OpenAlex: W1996700041
MaRDI QID: Q2427169
Publication date: 8 May 2008
Published in: Journal of Statistical Planning and Inference
Full work available at URL: https://doi.org/10.1016/j.jspi.2007.09.003
Mathematics Subject Classification:
- Linear inference, regression (62J99)
- Learning and adaptive systems in artificial intelligence (68T05)
- Applications of functional analysis in probability theory and statistics (46N30)
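The record itself does not restate the estimator whose learning rates the paper analyzes. For orientation only, the following is a minimal sketch of least-squares regularized regression (Tikhonov regularization in a reproducing kernel Hilbert space): the estimator minimizes the empirical squared loss plus \(\lambda\|f\|_K^2\), and by the representer theorem its coefficients solve \((K + m\lambda I)c = y\). The Gaussian kernel, the bandwidth, the AR(1) input chain (a standard example of an exponentially strongly mixing sequence), and all parameter values below are illustrative assumptions, not taken from the paper; the mixing condition enters the paper's rate analysis, not the fitting procedure.

```python
# Illustrative sketch (assumptions noted above): Tikhonov-regularized
# least-squares regression, f_z = argmin (1/m) sum (f(x_i)-y_i)^2 + lam*||f||_K^2.
import numpy as np

def gaussian_kernel(a, b, sigma=0.5):
    # Gram matrix K[i, j] = exp(-(a_i - b_j)^2 / (2 sigma^2))
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma**2))

def fit_regularized_ls(x, y, lam, sigma=0.5):
    # Representer theorem: coefficients solve (K + m*lam*I) c = y.
    m = len(x)
    K = gaussian_kernel(x, x, sigma)
    c = np.linalg.solve(K + m * lam * np.eye(m), y)
    return lambda t: gaussian_kernel(np.atleast_1d(t), x, sigma) @ c

# Dependent inputs: an AR(1) chain, a textbook example of an
# exponentially strongly mixing sequence (illustrative choice).
rng = np.random.default_rng(0)
m = 200
x = np.empty(m)
x[0] = rng.normal()
for i in range(1, m):
    x[i] = 0.7 * x[i - 1] + rng.normal(scale=0.5)
y = np.sin(x) + rng.normal(scale=0.1, size=m)

f = fit_regularized_ls(x, y, lam=1e-3)
print(f(np.array([0.0, 1.0])))  # estimates of sin(0) and sin(1)
```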
Related Items
- Generalization performance of Lagrangian support vector machine based on Markov sampling
- Least-squares regularized regression with dependent samples and \(q\)-penalty
- Regularized least square regression with dependent samples
- The consistency of least-square regularized regression with negative association sequence
- Generalization and learning rate of multi-class support vector classification and regression
- Spectral algorithms for learning with dependent observations
- Generalization bounds of ERM algorithm with Markov chain samples
- Learning performance of Tikhonov regularization algorithm with geometrically beta-mixing observations
- Regression learning with non-identically and non-independently sampling
- Concentration estimates for learning with unbounded sampling
- Generalization bounds of ERM algorithm with \(V\)-geometrically ergodic Markov chains
- Consistency of support vector machines using additive kernels for additive models
- Regularized least-squares regression: learning from a sequence
- Fast learning from \(\alpha\)-mixing observations
- Learning Theory Estimates with Observations from General Stationary Stochastic Processes
- Classification with non-i.i.d. sampling
- Indefinite kernel network with \(l^q\)-norm regularization
- Prediction of time series by statistical learning: general losses and fast rates
- Least-square regularized regression with non-iid sampling
- A note on application of integral operator in learning theory
- Consistent online Gaussian process regression without the sample complexity bottleneck
- Optimal rate for support vector machine regression with Markov chain samples
- Indefinite kernel network with dependent sampling
Cites Work
- Unnamed Item
- Rates of convergence for empirical processes of stationary mixing sequences
- Learning and generalisation. With applications to neural networks
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Regularization networks and support vector machines
- The performance bounds of learning machines based on exponentially strongly mixing sequences
- Learning rates of least-square regularized regression
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- Conditions for linear processes to be strong-mixing
- Mixing Conditions for Markov Chains
- Minimum complexity regression estimation with weakly dependent observations
- Shannon sampling and function reconstruction from point values
- Learning Theory