The convergence rate of a regularized ranking algorithm
DOI: 10.1016/j.jat.2012.09.001
zbMATH Open: 1252.68225
OpenAlex: W2034320649
MaRDI QID: Q692563
Author: Hong Chen
Publication date: 6 December 2012
Published in: Journal of Approximation Theory
Full work available at URL: https://doi.org/10.1016/j.jat.2012.09.001
Recommendations
- Learning rates for regularized least squares ranking algorithm
- On the convergence rate and some applications of regularized ranking algorithms
- Generalization bounds for ranking algorithms via algorithmic stability
- Convergence analysis of online algorithms
MSC Classification
- 68T05 Learning and adaptive systems in artificial intelligence
- 62D05 Sampling theory, sample surveys
- 62F05 Asymptotic properties of parametric tests
- 68Q32 Computational learning theory
- 46E22 Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces)
Cites Work
- Theory of Reproducing Kernels
- Ranking and empirical minimization of \(U\)-statistics
- Generalization bounds for ranking algorithms via algorithmic stability
- Margin-based ranking and an equivalence between AdaBoost and RankBoost
- Generalization bounds for the area under the ROC curve
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Application of integral operator for regularized least-square regression
- Learning coordinate covariances via gradients
- Statistical Analysis of Bayes Optimal Subset Ranking
- The \(p\)-norm push: a simple convex ranking algorithm that concentrates at the top of the list
Cited In (36)
- Extreme learning machine for ranking: generalization analysis and applications
- Kernel gradient descent algorithm for information theoretic learning
- Lavrentiev regularization method in rank learning
- The \(p\)-norm push: a simple convex ranking algorithm that concentrates at the top of the list
- Moduli of smoothness, \(K\)-functionals and Jackson-type inequalities associated with Kernel function approximation in learning theory
- Analysis of regularized least squares ranking with centered reproducing kernel
- Ranking with a P-Norm Push
- The framework of manifold regularization of ranking on graphs
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- A linear functional strategy for regularized ranking
- Bias corrected regularization kernel method in ranking
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Advances in Intelligent Data Analysis VI
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Learning rate of magnitude-preserving regularization ranking with dependent samples
- Learning rate of support vector machine for ranking
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Learning rates for regularized least squares ranking algorithm
- Error analysis of kernel regularized pairwise learning with a strongly convex loss
- On the convergence rate of kernel-based sequential greedy regression
- Generalization performance of bipartite ranking algorithms with convex losses
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- On the convergence rate and some applications of regularized ranking algorithms
- Coefficient-based regularization network with variance loss for error
- Generalization bounds for ranking algorithms via algorithmic stability
- Analysis of convergence performance of neural networks ranking algorithm
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- Distributed spectral pairwise ranking algorithms
- \(\ell^{1}\)-norm support vector machine for ranking with exponentially strongly mixing sequence
- The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models
- Learning theory of minimum error entropy under weak moment conditions
- Optimality of regularized least squares ranking with imperfect kernels
- Robust pairwise learning with Huber loss
- HITS Can Converge Slowly, but Not Too Slowly, in Score and Rank
- Regularized Nyström Subsampling in Covariate Shift Domain Adaptation Problems