Bias corrected regularization kernel method in ranking
From MaRDI portal
Publication: 4615656
DOI: 10.1142/S0219530518500161 · zbMath: 1442.68195 · OpenAlex: W2883974910 · MaRDI QID: Q4615656
Publication date: 29 January 2019
Published in: Analysis and Applications
Full work available at URL: https://doi.org/10.1142/s0219530518500161
General nonlinear regression (62J02) · Learning and adaptive systems in artificial intelligence (68T05)
Related Items
- Optimality of regularized least squares ranking with imperfect kernels
- Debiased magnitude-preserving ranking: learning rate and bias characterization
Cites Work
- Statistical significance in high-dimensional linear models
- Generalization performance of bipartite ranking algorithms with convex losses
- The convergence rate of a regularized ranking algorithm
- Concentration inequalities for random fields via coupling
- On regularization algorithms in learning theory
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Regularization networks with indefinite kernels
- Application of integral operator for regularized least-square regression
- Ranking and empirical minimization of \(U\)-statistics
- Generalization performance of magnitude-preserving semi-supervised ranking with graph-based regularization
- Shannon sampling. II: Connections to learning theory
- Learning theory estimates via integral operators and their approximations
- Divide and Conquer Kernel Ridge Regression: A Distributed Algorithm with Minimax Optimal Rates
- Error bounds for learning the kernel
- Indefinite kernel network with dependent sampling
- Confidence Intervals and Hypothesis Testing for High-Dimensional Regression
- Estimating the approximation error in learning theory
- Deep distributed convolutional neural networks: Universality
- Learning rates for regularized least squares ranking algorithm
- Confidence Intervals for Low Dimensional Parameters in High Dimensional Linear Models
- Theory of Reproducing Kernels
- On the convergence rate and some applications of regularized ranking algorithms