Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors
From MaRDI portal
Publication: 553267
DOI: 10.1016/j.neunet.2010.11.003 · zbMath: 1218.62043 · Wikidata: Q51618304 · Scholia: Q51618304 · MaRDI QID: Q553267
Publication date: 26 July 2011
Published in: Neural Networks
Full work available at URL: https://doi.org/10.1016/j.neunet.2010.11.003
62G08: Nonparametric regression and quantile regression
62G20: Asymptotic properties of nonparametric inference
62J02: General nonlinear regression
62M45: Neural nets and related approaches to inference from stochastic processes
Related Items
- Analysis of least squares regression estimates in case of additional errors in the variables
- Estimates on compressed neural networks regression
- Analysis of convergence performance of neural networks ranking algorithm
Cites Work
- Nonparametric regression with errors in variables
- Probability theory. Translated from the German by Robert B. Burckel
- Multilayer feedforward networks are universal approximators
- A distribution-free theory of nonparametric regression
- Nonasymptotic bounds on the \(L_{2}\) error of neural network regression estimates
- Adaptive regression estimation with multilayer feedforward neural networks
- Nonparametric Regression Estimation in the Heteroscedastic Errors-in-Variables Problem
- Universal approximation bounds for superpositions of a sigmoidal function
- Nonparametric estimation via empirical risk minimization
- Nonparametric regression in the presence of measurement error
- Neural Network Learning
- An $L_{2}$-Boosting Algorithm for Estimation of a Regression Function
- A Design-Adaptive Local Polynomial Estimator for the Errors-in-Variables Problem
- Approximation by superpositions of a sigmoidal function