scientific article; zbMATH DE number 6276241
zbMath: 1320.62096; arXiv: 1208.0848; MaRDI QID: Q5405253
Qiang Wu, Jun Fan, Ting Hu, Ding-Xuan Zhou
Publication date: 1 April 2014
Full work available at URL: https://arxiv.org/abs/1208.0848
Title: Learning theory approach to minimum error entropy criterion
MSC classification: Nonparametric regression and quantile regression (62G08); Asymptotic properties of nonparametric inference (62G20); Measures of information, entropy (94A17); Statistical aspects of information-theoretic topics (62B10)
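Since the title names the minimum error entropy (MEE) criterion, a minimal sketch may help readers orient among the related items below: in information-theoretic learning, MEE fits a model by minimizing a Parzen-window estimate of Rényi's quadratic entropy of the prediction errors. Everything in this snippet (the linear model, Gaussian kernel, bandwidth h, step size, and toy data) is an illustrative assumption, not code or notation from the indexed paper.

```python
# Minimal MEE sketch: minimize a Parzen-window estimate of Renyi's quadratic
# entropy of the errors e_i = y_i - x_i^T w. All parameters are assumptions.
import numpy as np

def mee_objective(w, X, y, h):
    """Empirical MEE risk: -log of the 'information potential'
    (1/n^2) * sum_{i,j} exp(-(e_i - e_j)^2 / (2 h^2))."""
    e = y - X @ w
    d = e[:, None] - e[None, :]                   # pairwise error differences
    return -np.log(np.mean(np.exp(-d**2 / (2 * h**2))))

def mee_grad(w, X, y, h):
    """Gradient of the MEE risk in w, via e_i - e_j = (y_i - y_j) - (x_i - x_j)^T w."""
    e = y - X @ w
    d = e[:, None] - e[None, :]
    k = np.exp(-d**2 / (2 * h**2))
    Xd = X[:, None, :] - X[None, :, :]            # pairwise differences x_i - x_j
    dpot = np.einsum('ij,ijk->k', k * d, Xd) / (h**2 * k.size)
    return -dpot / np.mean(k)

rng = np.random.default_rng(0)
n, p = 200, 3
X = rng.normal(size=(n, p))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.standard_t(df=3, size=n)     # heavy-tailed noise, where MEE is useful

w = np.zeros(p)
for _ in range(2000):                             # plain gradient descent, assumed step size
    w -= 0.05 * mee_grad(w, X, y, h=2.0)

print("MEE risk:", mee_objective(w, X, y, h=2.0), "estimated w:", np.round(w, 3))
# The MEE risk depends only on pairwise error differences, so it is invariant
# to adding a constant to the predictor; an intercept would be recovered
# separately, e.g. as the mean of the residuals.
```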
Related Items (35)
On reproducing kernel and density problems
Learning theory of minimum error entropy under weak moment conditions
Block coordinate type methods for optimization and learning
Consistency analysis of an empirical minimum error entropy algorithm
A Statistical Learning Approach to Modal Regression
On the robustness of regularized pairwise learning methods based on kernels
Distributed regression learning with coefficient regularization
Unnamed Item
Fast rates of minimum error entropy with heavy-tailed noise
Distributed kernel gradient descent algorithm for minimum error entropy principle
Kernel-based sparse regression with the correntropy-induced loss
Error analysis on regularized regression based on the maximum correntropy criterion
Learning rates for regularized least squares ranking algorithm
The performance of semi-supervised Laplacian regularized regression with the least square loss
Refined Generalization Bounds of Gradient Learning over Reproducing Kernel Hilbert Spaces
Stability and optimization error of stochastic gradient descent for pairwise learning
Online minimum error entropy algorithm with unbounded sampling
Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
Kernel gradient descent algorithm for information theoretic learning
Robust pairwise learning with Huber loss
Unregularized online learning algorithms with general loss functions
Robust kernel-based distribution regression
On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Statistical analysis of the moving least-squares method with unbounded sampling
Learning rate of magnitude-preserving regularization ranking with dependent samples
Learning with correntropy-induced losses for regression with mixture of symmetric stable noise
Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
On extension theorems and their connection to universal consistency in machine learning
Error bounds for learning the kernel
Debiased magnitude-preserving ranking: learning rate and bias characterization
Learning under \((1 + \epsilon)\)-moment conditions
New Insights Into Learning With Correntropy-Based Regression
A Framework of Learning Through Empirical Gain Maximization
Optimal learning with Gaussians and correntropy loss
Extreme learning machine for ranking: generalization analysis and applications