The following pages link to Are Loss Functions All the Same? (Q4832479):
Displaying 22 items.
- On the need for structure modelling in sequence prediction (Q331679)
- The learning rate of \(l_2\)-coefficient regularized classification with strong loss (Q383667)
- Analysis of support vector machines regression (Q1022433)
- Risk-sensitive loss functions for sparse multi-category classification problems (Q1031681)
- Local Rademacher complexity: sharper risk bounds with and without unlabeled samples (Q1669081)
- Dropout training for SVMs with data augmentation (Q1713848)
- Good edit similarity learning by loss minimization (Q1945121)
- Selection dynamics for deep neural networks (Q2003969)
- A statistical learning assessment of Huber regression (Q2054280)
- Functional linear regression with Huber loss (Q2099272)
- Learning rates of kernel-based robust classification (Q2157879)
- Genuinely distributed Byzantine machine learning (Q2166362)
- Optimizing predictive precision in imbalanced datasets for actionable revenue change prediction (Q2184072)
- An efficient primal dual prox method for non-smooth optimization (Q2339936)
- A random block-coordinate Douglas-Rachford splitting method with low computational complexity for binary logistic regression (Q2419533)
- Nonasymptotic analysis of robust regression with modified Huber's loss (Q2693696)
- Analysis of Regression Algorithms with Unbounded Sampling (Q3386411)
- A Framework of Learning Through Empirical Gain Maximization (Q5004380)
- Levenberg-Marquardt multi-classification using hinge loss function (Q6055118)
- Optimal shrinkage estimation of predictive densities under \(\alpha\)-divergences (Q6117926)
- Tensor networks in machine learning (Q6160060)
- A supervised fuzzy measure learning algorithm for combining classifiers (Q6492551)