A new convex objective function for the supervised learning of single-layer neural networks
DOI: 10.1016/j.patcog.2009.11.024 · zbMATH Open: 1191.68499 · OpenAlex: W2046251965 · Wikidata: Q58036772 · Scholia: Q58036772 · MaRDI QID: Q962791 · FDO: Q962791
Authors: Oscar Fontenla-Romero, Bertha Guijarro-Berdiñas, Beatriz Pérez-Sánchez, Amparo Alonso-Betanzos
Publication date: 7 April 2010
Published in: Pattern Recognition
Full work available at URL: https://doi.org/10.1016/j.patcog.2009.11.024
Keywords: global optimum; convex optimization; least squares; incremental learning; single-layer neural networks; supervised learning method
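The keywords point at the paper's central idea: training a single-layer network can become a convex least-squares problem with a unique global optimum. A minimal illustrative sketch of that general approach (all names, data, and the logistic activation here are assumptions for illustration, not the authors' exact formulation): by applying the inverse of the activation function to the desired outputs, the error can be measured before the nonlinearity, which reduces training to a linear least-squares system solvable in closed form.

```python
import numpy as np

# Hedged sketch (illustrative only): fit a single-layer network with a
# logistic activation by mapping the targets through the inverse activation
# (the logit), turning training into a convex linear least-squares problem.

rng = np.random.default_rng(0)

# Toy data: 100 samples, 3 inputs; targets generated in (0, 1).
X = rng.normal(size=(100, 3))
true_w = np.array([1.5, -2.0, 0.5])          # hypothetical generating weights
d = 1.0 / (1.0 + np.exp(-(X @ true_w)))      # logistic targets

# Inverse of the logistic activation applied to the targets.
z = np.log(d / (1.0 - d))

# Closed-form solution of the resulting linear least-squares system:
# minimize ||X w - z||^2, which is convex with a unique global optimum
# whenever X has full column rank.
w, *_ = np.linalg.lstsq(X, z, rcond=None)

print(np.allclose(w, true_w))  # → True: recovers the generating weights
```

Because the objective is an ordinary least-squares problem, no iterative gradient descent is needed and there are no local minima to worry about, which is the practical appeal of this family of methods.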
Cites Work
- Title not available
- Title not available
- Multicategory proximal support vector machine classifiers
- Title not available
- Relative loss bounds for multidimensional regression problems
- Fast recursive algorithms for a class of linear equations
- Complexity of Solving Linear Systems in Different Models of Computation
- Title not available
- A Global Optimum Approach for One-Layer Neural Networks
Cited In (6)
- Ensemble decision forest of RBF networks via hybrid feature clustering approach for high-dimensional data classification
- Generalization bounds for non-stationary mixing processes
- Sequential complexities and uniform martingale laws of large numbers
- Nonlinear single layer neural network training algorithm for incremental, nonstationary and distributed learning scenarios
- Asymptotic sequential Rademacher complexity of a finite function class
- A joint optimization of momentum item and Levenberg-Marquardt algorithm to level up the BPNN's generalization ability