The convergence rate of a regularized ranking algorithm


DOI: 10.1016/j.jat.2012.09.001
zbMath: 1252.68225
OpenAlex: W2034320649
MaRDI QID: Q692563

Hong Chen

Publication date: 6 December 2012

Published in: Journal of Approximation Theory

Full work available at URL: https://doi.org/10.1016/j.jat.2012.09.001




Related Items

Learning theory of minimum error entropy under weak moment conditions
Distributed spectral pairwise ranking algorithms
1-Norm support vector machine for ranking with exponentially strongly mixing sequence
A linear functional strategy for regularized ranking
On the convergence rate of kernel-based sequential greedy regression
Error analysis of kernel regularized pairwise learning with a strongly convex loss
Optimality of regularized least squares ranking with imperfect kernels
Learning rates for regularized least squares ranking algorithm
Regularized Nyström subsampling in covariate shift domain adaptation problems
Bias corrected regularization kernel method in ranking
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
Learning rate of support vector machine for ranking
Kernel gradient descent algorithm for information theoretic learning
Generalization performance of bipartite ranking algorithms with convex losses
Robust pairwise learning with Huber loss
Coefficient-based regularization network with variance loss for error
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
On the convergence rate and some applications of regularized ranking algorithms
The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
Regularized ranking with convex losses and \(\ell^1\)-penalty
Debiased magnitude-preserving ranking: learning rate and bias characterization
Extreme learning machine for ranking: generalization analysis and applications


