Are Loss Functions All the Same?

From MaRDI portal

Publication: 4832479


DOI: 10.1162/089976604773135104
zbMath: 1089.68109
Wikidata: Q34311744
Scholia: Q34311744
MaRDI QID: Q4832479

Alessandro Verri, Lorenzo Rosasco, Michele Piana, Ernesto De Vito, Andrea Caponnetto

Publication date: 4 January 2005

Published in: Neural Computation

Full work available at URL: https://doi.org/10.1162/089976604773135104


68T05: Learning and adaptive systems in artificial intelligence


Related Items

SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
Unnamed Item
Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
A Framework of Learning Through Empirical Gain Maximization
Accelerate stochastic subgradient method by leveraging local growth condition
Levenberg-Marquardt multi-classification using hinge loss function
Tensor networks in machine learning
On the need for structure modelling in sequence prediction
The learning rate of \(l_2\)-coefficient regularized classification with strong loss
Analysis of support vector machines regression
Risk-sensitive loss functions for sparse multi-category classification problems
Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
Dropout training for SVMs with data augmentation
Good edit similarity learning by loss minimization
Selection dynamics for deep neural networks
A statistical learning assessment of Huber regression
Functional linear regression with Huber loss
Learning rates of kernel-based robust classification
Genuinely distributed Byzantine machine learning
Optimizing predictive precision in imbalanced datasets for actionable revenue change prediction
An efficient primal dual prox method for non-smooth optimization
A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression
Nonasymptotic analysis of robust regression with modified Huber's loss
Analysis of Regression Algorithms with Unbounded Sampling



Cites Work