A linear functional strategy for regularized ranking
DOI: 10.1016/J.NEUNET.2015.08.012 · zbMATH Open: 1394.68295 · DBLP: journals/nn/KriukovaPPT16 · OpenAlex: W2165591566 · Wikidata: Q50551937 · Scholia: Q50551937 · MaRDI QID: Q1669294 · FDOQ: 1669294
Authors: Galyna Kriukova, Oleksandra Panasiuk, Sergei V. Pereverzyev, Pavlo Tkachenko
Publication date: 30 August 2018
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2015.08.012
Recommendations
- On the convergence rate and some applications of regularized ranking algorithms
- Regularized ranking with convex losses and \(\ell^1\)-penalty
- The convergence rate of a regularized ranking algorithm
- Learning rates for regularized least squares ranking algorithm
- On empirical eigenfunction-based ranking with \(\ell^1\) norm regularization
Classification (MSC): Learning and adaptive systems in artificial intelligence (68T05); Medical applications (general) (92C50)
Cites Work
- Regularization theory for ill-posed problems. Selected topics
- Geometry of linear ill-posed problems in variable Hilbert scales
- How general are general source conditions?
- Title not available
- Advances in large-margin classifiers
- Learning the kernel function via regularization
- Generalization bounds for ranking algorithms via algorithmic stability
- 10.1162/1532443041827916
- Learning theory estimates via integral operators and their approximations
- Adaptive kernel methods using the balancing principle
- Cross-validation based adaptation for regularization operators in learning theory
- On regularization algorithms in learning theory
- Learning coordinate covariances via gradients
- Title not available
- The convergence rate of a regularized ranking algorithm
- Adaptive estimation of linear functionals in Hilbert scales from indirect white noise observations
- Local solutions to inverse problems in geodesy. The impact of the noise covariance structure upon the accuracy of estimation
- Direct estimation of linear functionals from indirect noisy observations
- Convergence analysis of an empirical eigenfunction-based ranking algorithm with truncated sparsity
- Subset Ranking Using Regression
Cited In (16)
- Learning linear ranking functions for beam search with application to planning
- Title not available
- A new look at the automatic synthesis of linear ranking functions
- Manifold regularization based on Nyström type subsampling
- Levenberg-Marquardt multi-classification using hinge loss function
- Regularized Nyström subsampling in regression and ranking problems under general smoothness assumptions
- Online regularized pairwise learning with least squares loss
- Nyström type subsampling analyzed as a regularized projection
- Fast rates of minimum error entropy with heavy-tailed noise
- Title not available
- Pairwise learning problems with regularization networks and Nyström subsampling approach
- On the convergence rate and some applications of regularized ranking algorithms
- Multi-task learning via linear functional strategy
- On a regularization of unsupervised domain adaptation in RKHS
- Distributed spectral pairwise ranking algorithms
- The \(\mathrm{r}\)-\(\mathrm{d}\) class predictions in linear mixed models