Learning rates for regularized least squares ranking algorithm
Publication:5356934
DOI: 10.1142/S0219530517500063 · zbMath: 1420.68182 · MaRDI QID: Q5356934
No author found.
Publication date: 12 September 2017
Published in: Analysis and Applications
General nonlinear regression (62J02); Learning and adaptive systems in artificial intelligence (68T05)
Related Items (18)
Learning theory of minimum error entropy under weak moment conditions ⋮ Distributed spectral pairwise ranking algorithms ⋮ Convergence of online pairwise regression learning with quadratic loss ⋮ Quantitative convergence analysis of kernel based large-margin unified machines ⋮ On the K-functional in learning theory ⋮ Error analysis of kernel regularized pairwise learning with a strongly convex loss ⋮ Optimality of regularized least squares ranking with imperfect kernels ⋮ Bias corrected regularization kernel method in ranking ⋮ Kernel gradient descent algorithm for information theoretic learning ⋮ Robust pairwise learning with Huber loss ⋮ Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions ⋮ Online regularized pairwise learning with least squares loss ⋮ Convergence analysis of distributed multi-penalty regularized pairwise learning ⋮ Semi-supervised learning with summary statistics ⋮ Distributed learning with indefinite kernels ⋮ Debiased magnitude-preserving ranking: learning rate and bias characterization ⋮ Optimal learning with Gaussians and correntropy loss ⋮ Comparison theorems on large-margin learning
Cites Work
- Learning with coefficient-based regularization and \(\ell^1\)-penalty
- Concentration estimates for learning with \(\ell ^{1}\)-regularizer and data dependent hypothesis spaces
- U-processes: Rates of convergence
- The convergence rate of a regularized ranking algorithm
- Multi-kernel regularized classifiers
- \(U\)-processes indexed by Vapnik-Červonenkis classes of functions with applications to asymptotics and bootstrap of \(U\)-statistics with estimated parameters
- The covering number in learning theory
- Concentration estimates for learning with unbounded sampling
- An efficient algorithm for learning to rank from preference graphs
- Ranking and empirical minimization of \(U\)-statistics
- Learning rates of least-square regularized regression
- Learning Theory
- DOI: 10.1162/1532443041827916
- Probability Inequalities for Sums of Bounded Random Variables
- Learning Theory
- Learning Theory