Are Loss Functions All the Same?
From MaRDI portal
Publication: 4832479
DOI: 10.1162/089976604773135104
zbMATH Open: 1089.68109
OpenAlex: W2034365297
Wikidata: Q34311744 (Scholia: Q34311744)
MaRDI QID: Q4832479
FDO: Q4832479
Authors: Lorenzo Rosasco, Michele Piana, Alessandro Verri, Ernesto De Vito, Andrea Caponnetto
Publication date: 4 January 2005
Published in: Neural Computation
Full work available at URL: https://doi.org/10.1162/089976604773135104
Cites Work
- Regularization networks and support vector machines
- Theory of Reproducing Kernels
- On the mathematical foundations of learning
- The covering number in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Statistical properties and adaptive tuning of support vector machines
Cited In (40)
- Functional linear regression with Huber loss
- Convexity, Classification, and Risk Bounds
- Selection dynamics for deep neural networks
- Analysis of support vector machines regression
- Title not available
- Analysis of regression algorithms with unbounded sampling
- An investigation for loss functions widely used in machine learning
- Some thoughts about the design of loss functions
- Analysis of loss functions in support vector machines
- Incremental proximal gradient scheme with penalization for constrained composite convex optimization problems
- Levenberg-Marquardt multi-classification using hinge loss function
- A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression
- Bias of homotopic gradient descent for the hinge loss
- Learning rates of kernel-based robust classification
- On the need for structure modelling in sequence prediction
- Genuinely distributed Byzantine machine learning
- Risk-sensitive loss functions for sparse multi-category classification problems
- A statistical learning assessment of Huber regression
- An efficient primal dual prox method for non-smooth optimization
- Fast convergence rates of deep neural networks for classification
- Tensor networks in machine learning
- Loss functions
- A supervised fuzzy measure learning algorithm for combining classifiers
- SVM Soft Margin Classifiers: Linear Programming versus Quadratic Programming
- Good edit similarity learning by loss minimization
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples
- Classification with a reject option using a hinge loss
- Optimizing predictive precision in imbalanced datasets for actionable revenue change prediction
- Nonasymptotic analysis of robust regression with modified Huber's loss
- Double-well net for image segmentation
- Composite multiclass losses
- A Framework of Learning Through Empirical Gain Maximization
- The C-loss function for pattern classification
- Optimal shrinkage estimation of predictive densities under \(\alpha\)-divergences
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss
- Loss functions for finite sets
- Accelerate stochastic subgradient method by leveraging local growth condition
- How to compare different loss functions and their risks
- Dropout training for SVMs with data augmentation
- A study on L2-loss (squared hinge-loss) multiclass SVM