scientific article; zbMATH DE number 7255066
From MaRDI portal
Publication:4969074
Recommendations
- Learned-loss boosting
- Learning with varying insensitive loss
- Toward a unified approach to fitting loss models
- Learning without concentration for general loss functions
- On Bayesian learning via loss functions
- Learning with convex loss and indefinite kernels
- Large loss networks
- Optimal learning with Gaussians and correntropy loss
- An investigation for loss functions widely used in machine learning
Cites work
- scientific article; zbMATH DE number 1818892
- scientific article; zbMATH DE number 47310
- scientific article; zbMATH DE number 1332320
- scientific article; zbMATH DE number 2107836
- scientific article; zbMATH DE number 2116058
- scientific article; zbMATH DE number 3231692
- scientific article; zbMATH DE number 3285076
- scientific article; zbMATH DE number 3296905
- scientific article; zbMATH DE number 3062467
- scientific article; zbMATH DE number 3095897
- doi:10.1162/15324430260185628
- A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems
- A kernel two-sample test
- A primal-dual convergence analysis of boosting
- A shortest augmenting path algorithm for dense and sparse linear assignment problems
- Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization
- An O(n) algorithm for quadratic knapsack problems
- Bandit online optimization over the permutahedron
- Clustering with Bregman divergences
- Composite binary losses
- Composite multiclass losses
- Concerning nonnegative matrices and doubly stochastic matrices
- Conditional gradient algorithms with open loop step size rules
- Convex Analysis
- Convex analysis and monotone operator theory in Hilbert spaces
- Convex analysis and nonlinear optimization. Theory and examples
- Convexity, Classification, and Risk Bounds
- Dynamic programming algorithm optimization for spoken word recognition
- Elicitation of Personal Probabilities and Expectations
- Error bounds for convolutional codes and an asymptotically optimum decoding algorithm
- Fast projection onto the simplex and the \(l_1\) ball
- Finding optimum branchings
- Finding the nearest point in a polytope
- Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory
- Generalization of Shannon–Khinchin Axioms to Nonextensive Systems and the Uniqueness Theorem for the Nonextensive Entropy
- Graphical models, exponential families, and variational inference
- I-divergence geometry of probability distributions and minimization problems
- In defense of one-vs-all classification
- Information and exponential families in statistical theory
- Information geometry and its applications
- Information, divergence and risk for binary experiments
- Large margin methods for structured and interdependent output variables
- Learning permutations with exponential weights
- Learning using privileged information: SVM+ and weighted SVM
- Multiclass classification, information, divergence and surrogate risk
- Natural language processing (almost) from scratch
- Nonextensive information theoretic kernels on measures
- Numerical Optimization
- On ordered weighted averaging aggregation operators in multicriteria decisionmaking
- On surrogate loss functions and \(f\)-divergences
- On the consistency of multiclass classification methods
- On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms
- On the limited memory BFGS method for large scale optimization
- Online learning of Nash equilibria in congestion games
- Online linear optimization over permutations
- Online prediction under submodular constraints
- Optimum branchings
- Possible generalization of Boltzmann-Gibbs statistics
- Proximité et dualité dans un espace hilbertien
- Pseudo-Convex Functions
- Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension
- Regularized optimal transport and the rot mover's distance
- Robust Estimation of a Location Parameter
- Scikit-learn: machine learning in Python
- Sharp uniform convexity and smoothness inequalities for trace norms
- Smooth minimization of non-smooth functions
- Smoothing and first order methods: a unified framework
- Sparse Reconstruction by Separable Approximation
- Statistical Inference for Probabilistic Functions of Finite State Markov Chains
- Strictly Proper Scoring Rules, Prediction, and Estimation
- The Theory of Max-Min, with Applications
- The complexity of computing the permanent
- The generalized simplex method for minimizing a linear form under linear inequality restraints
- Uncertainty, Information, and Sequential Experiments
- VC theory of large margin multi-category classifiers
- Value regularization and Fenchel duality
Cited in (7)
- Learned-loss boosting
- Variational representations of annealing paths: Bregman information under monotonic embedding
- PyEPO: a PyTorch-based end-to-end predict-then-optimize library for linear and integer programming
- Tutorial on Amortized Optimization
- Structured learning based heuristics to solve the single machine scheduling problem with release times and sum of completion times
- Cross-entropy loss for recommending efficient fold-over technique
- An investigation for loss functions widely used in machine learning