Bias corrected regularization kernel method in ranking
From MaRDI portal
Publication:4615656
Recommendations
- The convergence rate of a regularized ranking algorithm
- Learning rates for regularized least squares ranking algorithm
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- On the convergence rate and some applications of regularized ranking algorithms
- Learning theory of distributed regression with bias corrected regularization kernel network
Cites work
- Scientific article; zbMATH DE number 1288298 (no title available)
- Scientific article; zbMATH DE number 1332320 (no title available)
- Application of integral operator for regularized least-square regression
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Communication-efficient sparse regression
- Concentration inequalities for random fields via coupling
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Confidence intervals for low dimensional parameters in high dimensional linear models
- Deep distributed convolutional neural networks: universality
- Distributed learning with regularized least squares
- Divide and conquer kernel ridge regression: a distributed algorithm with minimax optimal rates
- Estimating the approximation error in learning theory
- Error bounds for learning the kernel
- Generalization bounds for ranking algorithms via algorithmic stability
- Generalization bounds for the area under the ROC curve
- Generalization performance of bipartite ranking algorithms with convex losses
- Generalization performance of magnitude-preserving semi-supervised ranking with graph-based regularization
- Indefinite kernel network with dependent sampling
- Learning coordinate covariances via gradients
- Learning rates for regularized least squares ranking algorithm
- Learning theory estimates via integral operators and their approximations
- Learning theory of distributed regression with bias corrected regularization kernel network
- On regularization algorithms in learning theory
- On the convergence rate and some applications of regularized ranking algorithms
- Ranking and empirical minimization of \(U\)-statistics
- Ranking the best instances
- Regularization networks with indefinite kernels
- Shannon sampling. II: Connections to learning theory
- Statistical significance in high-dimensional linear models
- The convergence rate of a regularized ranking algorithm
- Theory of Reproducing Kernels
Cited in (4)
- Analysis of regularized least squares ranking with centered reproducing kernel
- Debiased magnitude-preserving ranking: learning rate and bias characterization
- Bounding the difference between RankRC and RankSVM and application to multi-level rare class kernel ranking
- Optimality of regularized least squares ranking with imperfect kernels