Positive semidefinite metric learning using boosting-like algorithms
From MaRDI portal
Cited in (13):
- Generalization bounds for metric and similarity learning
- Extending the relevant component analysis algorithm for metric learning using both positive and negative equivalence constraints
- Learning a distance metric from relative comparisons between quadruplets of images
- Distance metric learning with eigenvalue optimization
- \(\mathrm{OPM^2L}\): an optimal instance partition-based multi-metric learning method for heterogeneous dataset classification
- Tuning of the hyperparameters for \(L2\)-loss SVMs with the RBF kernel by the maximum-margin principle and the jackknife technique
- Conditional Gradient Methods for Convex Optimization with General Affine and Nonlinear Constraints
- A boosting approach for supervised Mahalanobis distance metric learning
- Efficient distance metric learning by adaptive sampling and mini-batch stochastic gradient descent (SGD)
- Conditional gradient sliding for convex optimization
- Supervised distance metric learning through maximization of the Jeffrey divergence
- Structured learning of binary codes with column generation for optimizing ranking measures
- Joint distance and similarity measure learning based on triplet-based constraints
MaRDI item: Q5405153