Debiased magnitude-preserving ranking: learning rate and bias characterization
DOI: 10.1016/j.jmaa.2020.123881
zbMath: 1436.68294
OpenAlex: W3001822681
Wikidata: Q126328257 (Scholia: Q126328257)
MaRDI QID: Q777111
Publication date: 3 July 2020
Published in: Journal of Mathematical Analysis and Applications
Full work available at URL: https://doi.org/10.1016/j.jmaa.2020.123881
Keywords: integral operator; convergence rate; reproducing kernel Hilbert space; bias correction; sampling operator; magnitude-preserving ranking; MPRank
MSC classes: Nonparametric estimation (62G05); Learning and adaptive systems in artificial intelligence (68T05); Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22); Statistical ranking and selection procedures (62F07)
Related Items (1)
Cites Work
- Learning rate of support vector machine for ranking
- The convergence rate of a regularized ranking algorithm
- Multi-kernel regularized classifiers
- A note on application of integral operator in learning theory
- An efficient algorithm for learning to rank from preference graphs
- Extreme learning machine for ranking: generalization analysis and applications
- Application of integral operator for regularized least-square regression
- Ranking and empirical minimization of \(U\)-statistics
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Learning Theory
- Bias corrected regularization kernel method in ranking
- Learning rates for regularized least squares ranking algorithm
- U-Processes and Preference Learning
- Theory of Reproducing Kernels
- On the convergence rate and some applications of regularized ranking algorithms