scientific article
From MaRDI portal
Publication:3093358
zbMath: 1222.68270 · MaRDI QID: Q3093358
Sayan Mukherjee, Ding-Xuan Zhou
Publication date: 12 October 2011
Full work available at URL: http://www.jmlr.org/papers/v7/mukherjee06a.html
Title: Learning Coordinate Covariances via Gradients
Classification: Classification and discrimination; cluster analysis (statistical aspects) (62H30) · Learning and adaptive systems in artificial intelligence (68T05)
Related Items (37)
Online regularized learning with pairwise loss functions ⋮ Normal estimation on manifolds by gradient learning ⋮ Hermite learning with gradient data ⋮ On the robustness of regularized pairwise learning methods based on kernels ⋮ Distributed regression learning with coefficient regularization ⋮ A linear functional strategy for regularized ranking ⋮ Learning gradients on manifolds ⋮ Learning gradients via an early stopping gradient descent method ⋮ A gradient enhanced \(\ell_{1}\)-minimization for sparse approximation of polynomial chaos expansions ⋮ Estimating variable structure and dependence in multitask learning via gradients ⋮ Structure learning via unstructured kernel-based M-estimation ⋮ Learning sparse gradients for variable selection and dimension reduction ⋮ Learning gradients from nonidentical data ⋮ Semi-supervised learning with the help of Parzen windows ⋮ Bias corrected regularization kernel method in ranking ⋮ Refined generalization bounds of gradient learning over reproducing kernel Hilbert spaces ⋮ Online pairwise learning algorithms ⋮ Query-dependent ranking and its asymptotic properties ⋮ Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity ⋮ Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels ⋮ Unregularized online learning algorithms with general loss functions ⋮ A gradient-enhanced L1 approach for the recovery of sparse trigonometric polynomials ⋮ Learning gradients by a gradient descent algorithm ⋮ The convergence rate of a regularized ranking algorithm ⋮ Approximation analysis of gradient descent algorithm for bipartite ranking ⋮ Learning the coordinate gradients ⋮ On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization ⋮ On the convergence rate and some applications of regularized ranking algorithms ⋮ Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions ⋮ Online regularized pairwise learning with least squares loss ⋮ Performance analysis of the LapRSSLG algorithm in learning theory ⋮ Modeling interactive components by coordinate kernel polynomial models ⋮ On extension theorems and their connection to universal consistency in machine learning ⋮ Debiased magnitude-preserving ranking: learning rate and bias characterization ⋮ High order Parzen windows and randomized sampling ⋮ Gradient learning in a classification setting by gradient descent ⋮ Space partitioning and regression maxima seeking via a mean-shift-inspired algorithm
This page was built for publication: Learning Coordinate Covariances via Gradients