The following pages link to (Q2880888):
Displaying 40 items.
- One-pass AUC optimization (Q286076) (← links)
- On the robustness of regularized pairwise learning methods based on kernels (Q325147) (← links)
- Supervised multidimensional scaling for visualization, classification, and bipartite ranking (Q452683) (← links)
- Generalization performance of bipartite ranking algorithms with convex losses (Q488690) (← links)
- Unregularized online learning algorithms with general loss functions (Q504379) (← links)
- Fast generalization rates for distance metric learning. Improved theoretical analysis for smooth strongly convex distance metric learning (Q669277) (← links)
- The convergence rate of a regularized ranking algorithm (Q692563) (← links)
- Debiased magnitude-preserving ranking: learning rate and bias characterization (Q777111) (← links)
- A linear functional strategy for regularized ranking (Q1669294) (← links)
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity (Q1722329) (← links)
- Approximation analysis of gradient descent algorithm for bipartite ranking (Q1760585) (← links)
- Analysis of convergence performance of neural networks ranking algorithm (Q1942699) (← links)
- Learning to rank on graphs (Q1959630) (← links)
- Robust pairwise learning with Huber loss (Q1979426) (← links)
- Generalization ability of online pairwise support vector machine (Q1996328) (← links)
- Stability analysis of learning algorithms for ontology similarity computation (Q2016684) (← links)
- Randomized smoothing variance reduction method for large-scale non-smooth convex optimization (Q2033403) (← links)
- The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models (Q2048216) (← links)
- Convergence of online pairwise regression learning with quadratic loss (Q2191834) (← links)
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization (Q2256621) (← links)
- Online pairwise learning algorithms with convex loss functions (Q2293252) (← links)
- Extreme learning machine for ranking: generalization analysis and applications (Q2339398) (← links)
- Online regularized learning with pairwise loss functions (Q2361154) (← links)
- Learning rate of magnitude-preserving regularization ranking with dependent samples (Q2800842) (← links)
- On extension theorems and their connection to universal consistency in machine learning (Q2835986) (← links)
- The performance of semi-supervised Laplacian regularized regression with the least square loss (Q2980112) (← links)
- Bias corrected regularization kernel method in ranking (Q4615656) (← links)
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions (Q4968723) (← links)
- Analysis of \(k\)-partite ranking algorithm in area under the receiver operating characteristic curve criterion (Q5028602) (← links)
- Distributed spectral pairwise ranking algorithms (Q5060714) (← links)
- Online regularized pairwise learning with non-i.i.d. observations (Q5063226) (← links)
- Stability and optimization error of stochastic gradient descent for pairwise learning (Q5132230) (← links)
- Online regularized pairwise learning with least squares loss (Q5220066) (← links)
- Convergence analysis of distributed multi-penalty regularized pairwise learning (Q5220068) (← links)
- Learning rates for regularized least squares ranking algorithm (Q5356934) (← links)
- Online Pairwise Learning Algorithms (Q5380417) (← links)
- On the convergence rate and some applications of regularized ranking algorithms (Q5963450) (← links)
- Error analysis of kernel regularized pairwise learning with a strongly convex loss (Q6112862) (← links)
- Optimality of regularized least squares ranking with imperfect kernels (Q6125450) (← links)
- Pairwise learning problems with regularization networks and Nyström subsampling approach (Q6488738) (← links)