Are Loss Functions All the Same?
Publication: 4832479
DOI: 10.1162/089976604773135104
zbMath: 1089.68109
Wikidata: Q34311744
Scholia: Q34311744
MaRDI QID: Q4832479
Alessandro Verri, Lorenzo Rosasco, Michele Piana, Ernesto De Vito, Andrea Caponnetto
Publication date: 4 January 2005
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/089976604773135104
MSC classification: 68T05 (Learning and adaptive systems in artificial intelligence)
Related Items
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- On the need for structure modelling in sequence prediction
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Analysis of support vector machines regression
- Risk-sensitive loss functions for sparse multi-category classification problems
- Good edit similarity learning by loss minimization
- An efficient primal dual prox method for non-smooth optimization
Cites Work
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- The covering number in learning theory
- Regularization networks and support vector machines
- On the mathematical foundations of learning
- Theory of Reproducing Kernels
- Statistical properties and adaptive tuning of support vector machines