Publication:3093358

From MaRDI portal


zbMath: 1222.68270 · MaRDI QID: Q3093358

Sayan Mukherjee, Ding-Xuan Zhou

Publication date: 12 October 2011

Full work available at URL: http://www.jmlr.org/papers/v7/mukherjee06a.html


62H30: Classification and discrimination; cluster analysis (statistical aspects)

68T05: Learning and adaptive systems in artificial intelligence


Related Items

- Bias corrected regularization kernel method in ranking
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- A Gradient-Enhanced L1 Approach for the Recovery of Sparse Trigonometric Polynomials
- Online regularized pairwise learning with least squares loss
- Performance analysis of the LapRSSLG algorithm in learning theory
- Learning gradients from nonidentical data
- Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces
- Online Pairwise Learning Algorithms
- On the convergence rate and some applications of regularized ranking algorithms
- On the robustness of regularized pairwise learning methods based on kernels
- Estimating variable structure and dependence in multitask learning via gradients
- Learning sparse gradients for variable selection and dimension reduction
- Unregularized online learning algorithms with general loss functions
- Learning gradients on manifolds
- Learning gradients via an early stopping gradient descent method
- Semi-supervised learning with the help of Parzen windows
- The convergence rate of a regularized ranking algorithm
- Learning the coordinate gradients
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Hermite learning with gradient data
- Reproducing kernel Hilbert spaces associated with analytic translation-invariant Mercer kernels
- High order Parzen windows and randomized sampling
- Gradient learning in a classification setting by gradient descent
- Distributed regression learning with coefficient regularization
- A linear functional strategy for regularized ranking
- Query-dependent ranking and its asymptotic properties
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Approximation analysis of gradient descent algorithm for bipartite ranking
- Modeling interactive components by coordinate kernel polynomial models
- Space partitioning and regression maxima seeking via a mean-shift-inspired algorithm
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
- Online regularized learning with pairwise loss functions
- A gradient enhanced \(\ell_{1}\)-minimization for sparse approximation of polynomial chaos expansions
- Learning gradients by a gradient descent algorithm
- On extension theorems and their connection to universal consistency in machine learning
- Normal estimation on manifolds by gradient learning