Analysis of convergence performance of neural networks ranking algorithm
From MaRDI portal
Publication:1942699
DOI: 10.1016/j.neunet.2012.06.012
zbMath: 1258.68130
Wikidata: Q45960347 (Scholia: Q45960347)
MaRDI QID: Q1942699
Publication date: 13 March 2013
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2012.06.012
MSC classification: 68T05 (Learning and adaptive systems in artificial intelligence)
Related Items
Generalization ability of fractional polynomial models, Extreme learning machine for ranking: generalization analysis and applications
Cites Work
- Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors
- Interpolation and rates of convergence for a class of neural networks
- Approximation by Ridge functions and neural networks with one hidden layer
- Approximation by superposition of sigmoidal and radial basis functions
- A note on different covering numbers in learning theory
- A distribution-free theory of nonparametric regression
- Best choices for regularization parameters in learning theory: on the bias-variance problem
- Ranking and empirical minimization of \(U\)-statistics
- Nonasymptotic bounds on the \(L_{2}\) error of neural network regression estimates
- Simultaneous \(\mathbf L^p\)-approximation order for neural networks
- On the mathematical foundations of learning
- Universal approximation bounds for superpositions of a sigmoidal function
- Generalization performance of regularization networks and support vector machines via entropy numbers of compact operators
- DOI: 10.1162/1532443041827916
- Subset Ranking Using Regression
- An Alternative Ranking Problem for Search Engines
- Learning Theory
- On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities
- Approximation by superpositions of a sigmoidal function