Pages that link to "Item:Q5027013"
The following pages link to Two Models of Double Descent for Weak Features (Q5027013):
Displaying 36 items.
- Dimension independent excess risk by stochastic gradient descent (Q2084455)
- The interpolation phase transition in neural networks: memorization and generalization under lazy training (Q2105197)
- Canonical thresholding for nonsparse high-dimensional linear regression (Q2119237)
- Surprises in high-dimensional ridgeless least squares interpolation (Q2131262)
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers (Q2148995)
- (Q4999109)
- (Q5053228)
- (Q5054595)
- Learning curves of generic features maps for realistic datasets with a teacher-student model* (Q5055409)
- Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime* (Q5055412)
- Dimensionality Reduction, Regularization, and Generalization in Overparameterized Regressions (Q5065466)
- Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization (Q5065474)
- Theoretical issues in deep networks (Q5073211)
- Benign overfitting in linear regression (Q5073215)
- Overparameterization and Generalization Error: Weighted Trigonometric Interpolation (Q5088865)
- Benefit of Interpolation in Nearest Neighbor Algorithms (Q5089734)
- (Q5159429)
- A Unifying Tutorial on Approximate Message Passing (Q5863992)
- Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks (Q5881138)
- Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks (Q5885828)
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation (Q5887828)
- High dimensional binary classification under label shift: phase transition and regularization (Q6062484)
- Large-dimensional random matrix theory and its applications in deep learning and wireless communications (Q6063730)
- On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions (Q6070298)
- A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors (Q6090836)
- Overparameterized maximum likelihood tests for detection of sparse vectors (Q6137612)
- High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections (Q6151666)
- Approximate spectral decomposition of Fisher information matrix for simple ReLU networks (Q6534979)
- The common intuition to transfer learning can win or lose: case studies for linear regression (Q6583519)
- Magnitude and angle dynamics in training single ReLU neurons (Q6587015)
- New equivalences between interpolation and SVMs: kernels and structured features (Q6617269)
- Double data piling: a high-dimensional solution for asymptotically perfect multi-category classification (Q6643296)
- Regularized zero-variance control variates (Q6650957)
- Deep networks for system identification: a survey (Q6659190)
- Kurdyka-Łojasiewicz exponent via Hadamard parametrization (Q6663111)
- Dropout drops double descent (Q6670075)