Pages that link to "Item:Q5073215"
From MaRDI portal
The following pages link to Benign overfitting in linear regression (Q5073215):
Displaying 50 items.
- Over-parametrized deep neural networks minimizing the empirical risk do not generalize well (Q1983625)
- High-dimensional dynamics of generalization error in neural networks (Q2057778)
- Double data piling leads to perfect classification (Q2074331)
- An elementary analysis of ridge regression with random design (Q2080945)
- Dimension independent excess risk by stochastic gradient descent (Q2084455)
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers (Q2091842)
- AdaBoost and robust one-bit compressed sensing (Q2102435)
- The interpolation phase transition in neural networks: memorization and generalization under lazy training (Q2105197)
- Canonical thresholding for nonsparse high-dimensional linear regression (Q2119237)
- Deep learning for inverse problems. Abstracts from the workshop held March 7--13, 2021 (hybrid meeting) (Q2131206)
- Surprises in high-dimensional ridgeless least squares interpolation (Q2131262)
- Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration (Q2134105)
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers (Q2148995)
- Free dynamics of feature learning processes (Q2679634)
- The leave-worst-\(k\)-out criterion for cross validation (Q2693781)
- (Q4999109)
- Generalization Error of Minimum Weighted Norm and Kernel Interpolation (Q4999364)
- Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction (Q5004318)
- A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent* (Q5020045)
- Two Models of Double Descent for Weak Features (Q5027013)
- (Q5053192)
- (Q5053228)
- (Q5054595)
- Learning curves of generic features maps for realistic datasets with a teacher-student model* (Q5055409)
- Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime* (Q5055412)
- On the proliferation of support vectors in high dimensions* (Q5055426)
- Dimensionality Reduction, Regularization, and Generalization in Overparameterized Regressions (Q5065466)
- Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization (Q5065474)
- The unreasonable effectiveness of deep learning in artificial intelligence (Q5073209)
- Overparameterization and Generalization Error: Weighted Trigonometric Interpolation (Q5088865)
- Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces (Q5089731)
- Benefit of Interpolation in Nearest Neighbor Algorithms (Q5089734)
- (Q5159429)
- (Q5159434)
- A Unifying Tutorial on Approximate Message Passing (Q5863992)
- For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability (Q5873932)
- Mehler's Formula, Branching Process, and Compositional Kernels of Deep Neural Networks (Q5881138)
- Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks (Q5885828)
- Deep learning: a statistical viewpoint (Q5887827)
- Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation (Q5887828)
- Neural network approximation (Q5887830)
- HARFE: hard-ridge random feature expansion (Q6049834)
- A note on the prediction error of principal component regression in high dimensions (Q6050280)
- High dimensional binary classification under label shift: phase transition and regularization (Q6062484)
- On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions (Q6070298)
- A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors (Q6090836)
- Towards data augmentation in graph neural network: an overview and evaluation (Q6101311)
- PAC-learning with approximate predictors (Q6103580)
- Random neural networks in the infinite width limit as Gaussian processes (Q6138923)
- A domain-theoretic framework for robustness analysis of neural networks (Q6149907)