Iterative Regularization for Learning with Convex Loss Functions
From MaRDI portal
Publication:2810891
zbMath 1360.68689 · arXiv 1503.08985 · MaRDI QID: Q2810891
Ding-Xuan Zhou, Lorenzo Rosasco, Jun Hong Lin
Publication date: 6 June 2016
Full work available at URL: https://arxiv.org/abs/1503.08985
Mathematics Subject Classification (MSC):
Nonparametric regression and quantile regression (62G08)
Classification and discrimination; cluster analysis (statistical aspects) (62H30)
Convex programming (90C25)
Learning and adaptive systems in artificial intelligence (68T05)
Hilbert spaces with reproducing kernels (= (proper) functional Hilbert spaces, including de Branges-Rovnyak and other structured spaces) (46E22)
Related Items (9)
Block coordinate type methods for optimization and learning ⋮ Graph-Dependent Implicit Regularisation for Distributed Stochastic Subgradient Descent ⋮ Gradient descent for robust kernel-based regression ⋮ Kernel-based maximum correntropy criterion with gradient descent method ⋮ Analysis of Online Composite Mirror Descent Algorithm ⋮ Kernel gradient descent algorithm for information theoretic learning ⋮ Modified Fejér sequences and applications ⋮ Online pairwise learning algorithms with convex loss functions ⋮ From inexact optimization to learning via gradient concentration
This page was built for publication: Iterative Regularization for Learning with Convex Loss Functions