scientific article; zbMATH DE number 6276241

From MaRDI portal
Publication:5405253

zbMath 1320.62096 · arXiv 1208.0848 · MaRDI QID Q5405253

Qiang Wu, Jun Fan, Ting Hu, Ding-Xuan Zhou

Publication date: 1 April 2014

Full work available at URL: https://arxiv.org/abs/1208.0848

Title: Learning theory approach to minimum error entropy criterion



Related Items (35)

On reproducing kernel and density problems
Learning theory of minimum error entropy under weak moment conditions
Block coordinate type methods for optimization and learning
Consistency analysis of an empirical minimum error entropy algorithm
A Statistical Learning Approach to Modal Regression
On the robustness of regularized pairwise learning methods based on kernels
Distributed regression learning with coefficient regularization
Unnamed Item
Fast rates of minimum error entropy with heavy-tailed noise
Distributed kernel gradient descent algorithm for minimum error entropy principle
Kernel-based sparse regression with the correntropy-induced loss
Error analysis on regularized regression based on the maximum correntropy criterion
Learning rates for regularized least squares ranking algorithm
The performance of semi-supervised Laplacian regularized regression with the least square loss
Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces
Stability and optimization error of stochastic gradient descent for pairwise learning
Online minimum error entropy algorithm with unbounded sampling
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
Kernel gradient descent algorithm for information theoretic learning
Robust pairwise learning with Huber loss
Unregularized online learning algorithms with general loss functions
Robust kernel-based distribution regression
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Statistical analysis of the moving least-squares method with unbounded sampling
Learning rate of magnitude-preserving regularization ranking with dependent samples
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
On extension theorems and their connection to universal consistency in machine learning
Error bounds for learning the kernel
Debiased magnitude-preserving ranking: learning rate and bias characterization
Learning under \((1 + \epsilon)\)-moment conditions
New Insights Into Learning With Correntropy-Based Regression
A Framework of Learning Through Empirical Gain Maximization
Optimal learning with Gaussians and correntropy loss
Extreme learning machine for ranking: generalization analysis and applications

This page was built for publication: Learning theory approach to minimum error entropy criterion