Learning rate of magnitude-preserving regularization ranking with dependent samples
Publication: 2800842
DOI: 10.1142/S0219691316500016 · zbMATH Open: 1333.68227 · MaRDI QID: Q2800842
Author: Hong Chen
Publication date: 18 April 2016
Published in: International Journal of Wavelets, Multiresolution and Information Processing
Recommendations
- The convergence rate of a regularized ranking algorithm
- Learning rates for regularized least squares ranking algorithm
- On ranking and generalization bounds
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Generalization bounds for ranking algorithms via algorithmic stability
Cites Work
- Learning Theory
- Generalization bounds for ranking algorithms via algorithmic stability
- Learning theory approach to minimum error entropy criterion
- Best choices for regularization parameters in learning theory: on the bias-variance problem.
- Online learning with Markov sampling
- Learning and generalisation. With applications to neural networks.
- On ranking and generalization bounds
- An Alternative Ranking Problem for Search Engines
- Extreme learning machine for ranking: generalization analysis and applications
Cited In (4)