INDEFINITE KERNEL NETWORK WITH DEPENDENT SAMPLING
Publication: 2855474
DOI: 10.1142/S0219530513500206 · zbMath: 1281.68194 · OpenAlex: W2059739409 · MaRDI QID: Q2855474
Publication date: 25 October 2013
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530513500206
consistency; regularization; indefinite kernel; regression learning; \(\alpha\)-mixing condition; kernel network
Learning and adaptive systems in artificial intelligence (68T05); Fourier and Fourier-Stieltjes transforms and other transforms of Fourier type (42B10); Data structures (68P05)
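The record carries no technical exposition of its own, but the title and keywords refer to the coefficient-regularized least squares scheme treated in the cited work "Least square regression with indefinite kernels and coefficient regularization". As a rough, hedged illustration only (not taken from this paper's text), the Python sketch below fits \(f(x)=\sum_i \alpha_i K(x_i,x)\) by minimizing the empirical squared error plus \(\lambda\|\alpha\|^2\); the normal equations involve the positive semidefinite matrix \(K^{\top}K\), so the problem stays well posed even when the kernel matrix \(K\) is indefinite. The function name, the sigmoid kernel, and the value of lam are assumptions made for the example; with dependent (\(\alpha\)-mixing) sampling, as in the paper's setting, the estimator itself is unchanged and only the error analysis differs.

    import numpy as np

    def indefinite_kernel_network(X, y, kernel, lam):
        # Coefficient-regularized least squares (illustrative sketch):
        #   f(x) = sum_i alpha_i * K(x_i, x),
        #   alpha = argmin (1/m) * ||K alpha - y||^2 + lam * ||alpha||^2.
        # The system below uses K^T K, which is positive semidefinite even if
        # K is indefinite, so it is solvable for any lam > 0.
        m = len(X)
        K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
        alpha = np.linalg.solve(K.T @ K + m * lam * np.eye(m), K.T @ y)
        return lambda x: sum(a * kernel(xi, x) for a, xi in zip(alpha, X))

    # Toy usage with a sigmoid "kernel", which is generally indefinite.
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=50)
    y = np.sin(np.pi * X) + 0.1 * rng.standard_normal(50)
    f = indefinite_kernel_network(X, y, lambda s, t: np.tanh(2.0 * s * t + 1.0), lam=1e-3)
    print(f(0.3))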
Related Items (13)
- Distributed learning with partial coefficients regularization
- An oracle inequality for regularized risk minimizers with strongly mixing observations
- Kernel-based sparse regression with the correntropy-induced loss
- Learning theory approach to a system identification problem involving atomic norm
- Bias corrected regularization kernel method in ranking
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- A simpler approach to coefficient regularized support vector machines regression
- Indefinite kernel network with \(l^q\)-norm regularization
- Constructive analysis for coefficient regularization regression algorithms
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- Coefficient-based regression with non-identical unbounded sampling
- Generalized support vector regression: Duality and tensor-kernel representation
- Learning rates of regression with q-norm loss and threshold
Cites Work
- Optimal learning rates for least squares regularized regression with unbounded sampling
- Least square regression with indefinite kernels and coefficient regularization
- Regularized least square regression with dependent samples
- Support vector machine classification with indefinite kernels
- Regularization networks with indefinite kernels
- The generalization performance of ERM algorithm with strongly mixing observations
- Regularization networks and support vector machines
- Optimal rates for the regularized least-squares algorithm
- Learning rates of regularized regression for exponentially strongly mixing sequence
- Learning theory estimates via integral operators and their approximations
- On the mathematical foundations of learning
- ONLINE REGRESSION WITH VARYING GAUSSIANS AND NON-IDENTICAL DISTRIBUTIONS
- Learning Theory
- CROSS-VALIDATION BASED ADAPTATION FOR REGULARIZATION OPERATORS IN LEARNING THEORY
- Minimum complexity regression estimation with weakly dependent observations
- Leave-One-Out Bounds for Kernel Methods
- Supercritical age-dependent branching processes with immigration