Learning rate of magnitude-preserving regularization ranking with dependent samples
From MaRDI portal
Publication:2800842
Recommendations
- The convergence rate of a regularized ranking algorithm
- Learning rates for regularized least squares ranking algorithm
- On ranking and generalization bounds
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Generalization bounds for ranking algorithms via algorithmic stability
Cites work
- An Alternative Ranking Problem for Search Engines
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Extreme learning machine for ranking: generalization analysis and applications
- Generalization bounds for ranking algorithms via algorithmic stability
- Learning Theory
- Learning and generalisation. With applications to neural networks
- Learning theory approach to minimum error entropy criterion
- Online Learning with Markov Sampling
- On ranking and generalization bounds
Cited in (4)