The convergence rate of a regularized ranking algorithm
From MaRDI portal
MSC classification
- Learning and adaptive systems in artificial intelligence (68T05)
- Sampling theory, sample surveys (62D05)
- Asymptotic properties of parametric tests (62F05)
- Computational learning theory (68Q32)
- Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Recommendations
- Learning rates for regularized least squares ranking algorithm
- On the convergence rate and some applications of regularized ranking algorithms
- Generalization bounds for ranking algorithms via algorithmic stability
- Convergence analysis of online algorithms
Cites work
- Application of integral operator for regularized least-square regression
- Generalization bounds for ranking algorithms via algorithmic stability
- Generalization bounds for the area under the ROC curve
- Learning coordinate covariances via gradients
- Learning theory estimates via integral operators and their approximations
- Margin-based ranking and an equivalence between AdaBoost and RankBoost
- Ranking and empirical minimization of \(U\)-statistics
- Shannon sampling. II: Connections to learning theory
- Statistical Analysis of Bayes Optimal Subset Ranking
- The \(p\)-norm push: a simple convex ranking algorithm that concentrates at the top of the list
- Theory of Reproducing Kernels
Cited in (36 documents)
- Analysis of regularized least squares ranking with centered reproducing kernel
- Learning rate of support vector machine for ranking
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Extreme learning machine for ranking: generalization analysis and applications
- \(\ell^{1}\)-norm support vector machine for ranking with exponentially strongly mixing sequence
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- HITS Can Converge Slowly, but Not Too Slowly, in Score and Rank
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Learning rate of magnitude-preserving regularization ranking with dependent samples
- Learning theory of minimum error entropy under weak moment conditions
- Advances in Intelligent Data Analysis VI
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- Learning rates for regularized least squares ranking algorithm
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- Robust pairwise learning with Huber loss
- On the convergence rate and some applications of regularized ranking algorithms
- Generalization bounds for ranking algorithms via algorithmic stability
- Distributed spectral pairwise ranking algorithms
- Lavrentiev regularization method in rank learning
- Kernel gradient descent algorithm for information theoretic learning
- The \(p\)-norm push: a simple convex ranking algorithm that concentrates at the top of the list
- A linear functional strategy for regularized ranking
- On the convergence rate of kernel-based sequential greedy regression
- Analysis of convergence performance of neural networks ranking algorithm
- Ranking with a P-Norm Push
- Bias corrected regularization kernel method in ranking
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- Generalization performance of bipartite ranking algorithms with convex losses
- The framework of manifold regularization of ranking on graphs
- The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models
- Coefficient-based regularization network with variance loss for error
- Optimality of regularized least squares ranking with imperfect kernels
- Moduli of smoothness, \(K\)-functionals and Jackson-type inequalities associated with Kernel function approximation in learning theory
This page was built for publication: The convergence rate of a regularized ranking algorithm (MaRDI item Q692563)