The convergence rate of a regularized ranking algorithm
From MaRDI portal
Publication: 692563
DOI: 10.1016/j.jat.2012.09.001
zbMath: 1252.68225
OpenAlex: W2034320649
MaRDI QID: Q692563
Publication date: 6 December 2012
Published in: Journal of Approximation Theory
Full work available at URL: https://doi.org/10.1016/j.jat.2012.09.001
Mathematics Subject Classification:
- Computational learning theory (68Q32)
- Sampling theory, sample surveys (62D05)
- Learning and adaptive systems in artificial intelligence (68T05)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
- Asymptotic properties of parametric tests (62F05)
Related Items
- Learning theory of minimum error entropy under weak moment conditions
- Distributed spectral pairwise ranking algorithms
- \(\ell^1\)-norm support vector machine for ranking with exponentially strongly mixing sequence
- A linear functional strategy for regularized ranking
- On the convergence rate of kernel-based sequential greedy regression
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Optimality of regularized least squares ranking with imperfect kernels
- Learning rates for regularized least squares ranking algorithm
- Regularized Nyström subsampling in covariate shift domain adaptation problems
- Bias corrected regularization kernel method in ranking
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Learning rate of support vector machine for ranking
- Kernel gradient descent algorithm for information theoretic learning
- Generalization performance of bipartite ranking algorithms with convex losses
- Robust pairwise learning with Huber loss
- Coefficient-based regularization network with variance loss for error
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- On the convergence rate and some applications of regularized ranking algorithms
- The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Extreme learning machine for ranking: generalization analysis and applications
Cites Work
- Application of integral operator for regularized least-square regression
- Ranking and empirical minimization of \(U\)-statistics
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Statistical Analysis of Bayes Optimal Subset Ranking
- Theory of Reproducing Kernels