scientific article; zbMATH DE number 7307487
From MaRDI portal
Publication:5149257
Authors: Kay Giesecke, Enguerrand Horel
Publication date: 8 February 2021
Full work available at URL: https://arxiv.org/abs/1902.06021
Title: not available (zbMATH Open Web Interface contents unavailable due to conflicting licenses)
Keywords: neural network; nonlinear regression; nonparametric regression; feature selection; significance test; model interpretability
Related Items
- Asymptotic properties of neural network sieve estimators
- Variable Selection Via Thompson Sampling
- Asset pricing with neural networks: significance tests
- A goodness-of-fit test based on neural network sieve estimators
- Reinforcement learning and stochastic optimisation
Cites Work
- Likelihood Ratio Tests for Model Selection and Non-Nested Hypotheses
- Cube root asymptotics
- Convergence rate of sieve estimates
- On methods of sieves and penalization
- Multilayer feedforward networks are universal approximators
- Weak convergence and empirical processes. With applications to statistics
- Bracketing metric entropy rates and empirical central limit theorems for function classes of Besov- and Sobolev-type
- Asymptotic Statistics
- Nonparametric Significance Testing
- Sieve Extremum Estimates for Weakly Dependent Data
- Improved rates and asymptotic normality for nonparametric neural network estimators
- Nonparametric Selection of Regressors: The Nonnested Case
- Consistent Model Specification Tests: Omitted Variables and Semiparametric Functional Forms
- Applied Nonparametric Econometrics
- Bootstrap non-parametric significance test
- Some Asymptotic Results for Learning in Single Hidden-Layer Feedforward Network Models
- Nonparametric Inferences for Additive Models
- Approximation by superpositions of a sigmoidal function