Scientific article

From MaRDI portal
Publication: 3093358

zbMath: 1222.68270 · MaRDI QID: Q3093358

Sayan Mukherjee, Ding-Xuan Zhou

Publication date: 12 October 2011

Full work available at URL: http://www.jmlr.org/papers/v7/mukherjee06a.html

Title: unavailable via the zbMATH Open Web Interface due to conflicting licenses.



Related Items (37)

Online regularized learning with pairwise loss functions
Normal estimation on manifolds by gradient learning
Hermite learning with gradient data
On the robustness of regularized pairwise learning methods based on kernels
Distributed regression learning with coefficient regularization
A linear functional strategy for regularized ranking
Learning gradients on manifolds
Learning gradients via an early stopping gradient descent method
A gradient enhanced \(\ell_{1}\)-minimization for sparse approximation of polynomial chaos expansions
Estimating variable structure and dependence in multitask learning via gradients
Structure learning via unstructured kernel-based M-estimation
Learning sparse gradients for variable selection and dimension reduction
Learning gradients from nonidentical data
Semi-supervised learning with the help of Parzen windows
Bias corrected regularization kernel method in ranking
Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces
Online pairwise learning algorithms
Query-dependent ranking and its asymptotic properties
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
Unregularized online learning algorithms with general loss functions
A gradient-enhanced L1 approach for the recovery of sparse trigonometric polynomials
Learning gradients by a gradient descent algorithm
The convergence rate of a regularized ranking algorithm
Approximation analysis of gradient descent algorithm for bipartite ranking
Learning the coordinate gradients
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
On the convergence rate and some applications of regularized ranking algorithms
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
Online regularized pairwise learning with least squares loss
Performance analysis of the LapRSSLG algorithm in learning theory
Modeling interactive components by coordinate kernel polynomial models
On extension theorems and their connection to universal consistency in machine learning
Debiased magnitude-preserving ranking: learning rate and bias characterization
High order Parzen windows and randomized sampling
Gradient learning in a classification setting by gradient descent
Space partitioning and regression maxima seeking via a mean-shift-inspired algorithm



