Linearized two-layers neural networks in high dimension (Q2039801)
From MaRDI portal
scientific article
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Linearized two-layers neural networks in high dimension | scientific article | |
Statements
Linearized two-layers neural networks in high dimension (English)
5 July 2021
The authors study nonparametric regression problems for univariate responses \(y_1, \ldots, y_n\) and \(\mathbb{R}^d\)-valued feature vectors \(\mathbf{x}_1, \ldots, \mathbf{x}_n\), where the tuples \((y_i, \mathbf{x}_i)_{1 \leq i \leq n}\) are assumed to be stochastically independent and identically distributed. Their goal is to construct a function \(f: \mathbb{R}^d \to \mathbb{R}\) which predicts future responses. The quality of such an \(f\) is assessed via its squared prediction risk. In particular, the authors consider choosing \(f\) from the class \(\mathcal{F}_{\text{NN}}\) of two-layer neural networks. They study an approximation, based on a first-order Taylor expansion, that decomposes \(f \in \mathcal{F}_{\text{NN}}\) into a part belonging to a random-features model and a part belonging to a neural tangent class. The approximation errors of both parts are analyzed under different asymptotic regimes in which \(n\) and/or \(d\) tend to infinity. Furthermore, the generalization error of certain kernel methods is analyzed. Besides these theoretical contributions, the authors also present some numerical results.
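A minimal numerical sketch of the decomposition described above may help fix ideas; it is the reviewer's own illustration, not code from the paper, and the width \(N\), ReLU activation, and \(1/\sqrt{N}\) scaling are assumptions made for concreteness. Linearizing a two-layer network \(f(\mathbf{x}; \mathbf{a}, \mathbf{W}) = N^{-1/2}\sum_i a_i\,\sigma(\langle \mathbf{w}_i, \mathbf{x}\rangle)\) to first order around its initialization yields one term that is linear in the second-layer weights (the random-features part) and one that is linear in the first-layer weights (the neural-tangent part):

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 20, 1000  # input dimension, hidden width (illustrative choices)

# Random initialization: first-layer weights W, second-layer signs a
W = rng.normal(size=(N, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=N)

def relu(z):
    return np.maximum(z, 0.0)

def f_nn(x, a, W):
    """Two-layer network f(x) = (1/sqrt(N)) * sum_i a_i * relu(<w_i, x>)."""
    return a @ relu(W @ x) / np.sqrt(N)

def rf_features(x, W):
    """Random-features part: gradient of f w.r.t. the second-layer weights a."""
    return relu(W @ x) / np.sqrt(N)

def nt_features(x, a, W):
    """Neural-tangent part: gradient of f w.r.t. the first-layer weights W.
    d f / d W_ij = (1/sqrt(N)) * a_i * 1{<w_i, x> > 0} * x_j."""
    return (a * (W @ x > 0))[:, None] * x[None, :] / np.sqrt(N)

# First-order Taylor expansion around (a, W) for a small perturbation (da, dW)
x = rng.normal(size=d)
da = 1e-3 * rng.normal(size=N)
dW = 1e-3 * rng.normal(size=(N, d))

linearized = (f_nn(x, a, W)
              + rf_features(x, W) @ da          # random-features term
              + np.sum(nt_features(x, a, W) * dW))  # neural-tangent term
exact = f_nn(x, a + da, W + dW)
print(abs(exact - linearized))  # small for small perturbations
```

The two gradient maps `rf_features` and `nt_features` are exactly the (finite-width) feature maps whose associated kernels underlie the random-features and neural-tangent regimes analyzed in the paper.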
approximation bounds
kernel ridge regression
neural tangent class
random features