scientific article
From MaRDI portal
Publication: 3973919
zbMath: 0739.62001 · MaRDI QID: Q3973919
Publication date: 26 June 1992
Title: unavailable (zbMATH Open Web Interface contents not displayed due to conflicting licenses).
Keywords: consistency; model selection; rates of convergence; polynomial regression; artificial neural networks; approximation error; parameter estimation error; risk bounds; statistical estimation of functions; complexity of models; complexity regularization criteria; index of resolvability; minimum description-length criteria; near asymptotic optimality
Mathematics Subject Classification: Density estimation (62G07); Asymptotic properties of nonparametric inference (62G20); Applications of statistics (62P99); Statistical aspects of information-theoretic topics (62B10)
Related Items
How well can a regression function be estimated if the distribution of the (random) design is concentrated on a finite set? ⋮ Optimal aggregation of classifiers in statistical learning. ⋮ Data-adaptive estimation of the treatment-specific mean ⋮ Neural networks and logistic regression: Part I ⋮ Model selection by bootstrap penalization for classification ⋮ Suboptimal behavior of Bayes and MDL in classification under misspecification ⋮ Quasicycles revisited: apparent sensitivity to initial conditions ⋮ Synchronous Boltzmann machines can be universal approximators ⋮ Model selection in nonparametric regression ⋮ Deep ReLU neural network approximation in Bochner spaces and applications to parametric PDEs ⋮ Nonlinear orthogonal series estimates for random design regression ⋮ Model selection in reinforcement learning ⋮ About the non-asymptotic behaviour of Bayes estimators ⋮ Collocation approximation by deep neural ReLU networks for parametric and stochastic PDEs with lognormal inputs ⋮ Multiclass classification with potential function rules: margin distribution and generalization ⋮ Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ ⋮ Deep ReLU networks and high-order finite element methods ⋮ Analysis of the rate of convergence of fully connected deep neural network regression estimates with smooth activation function ⋮ Ridgelets: estimating with ridge functions ⋮ On estimation of surrogate models for multivariate computer experiments ⋮ Model selection based on minimum description length ⋮ A new method for estimation and model selection: \(\rho\)-estimation ⋮ Discriminatively regularized least-squares classification ⋮ Approximation and learning by greedy algorithms ⋮ On the mathematical foundations of learning ⋮ A multi-loss super regression learner (MSRL) with application to survival prediction using proteomics ⋮ Analysis of the rate of convergence of least squares neural network regression estimates in case of measurement errors ⋮ Density estimation by the penalized combinatorial method ⋮ Nonasymptotic bounds on the \(L_{2}\) error of neural network regression estimates ⋮ Theory of Classification: a Survey of Some Recent Advances ⋮ On the rate of convergence of fully connected deep neural network regression estimates ⋮ Gaussian model selection with an unknown variance ⋮ On deep learning as a remedy for the curse of dimensionality in nonparametric regression ⋮ Hybrid Machine Learning Model for Continuous Microarray Time Series ⋮ Nonparametric estimation of low rank matrix valued function ⋮ Information-theoretic determination of minimax rates of convergence ⋮ Smooth discrimination analysis ⋮ Functional aggregation for nonparametric regression. ⋮ Adaptive estimation in autoregression or \(\beta\)-mixing regression via model selection ⋮ Interpreting neural-network results: a simulation study.