Binary classification of Gaussian mixtures: abundance of support vectors, benign overfitting, and regularization
Publication:5065474
Abstract: Deep neural networks generalize well despite being exceedingly overparameterized and trained without explicit regularization. This curious phenomenon has inspired extensive research activity aimed at establishing its statistical principles: Under what conditions is it observed? How do these depend on the data and on the training algorithm? When does regularization benefit generalization? While such questions remain wide open for deep neural nets, recent works have attempted to gain insights by studying simpler, often linear, models. Our paper contributes to this growing line of work by examining binary linear classification under a generative Gaussian mixture model. Motivated by recent results on the implicit bias of gradient descent, we study both max-margin SVM classifiers (corresponding to logistic loss) and min-norm interpolating classifiers (corresponding to least-squares loss). First, we leverage an idea introduced in [V. Muthukumar et al., arXiv:2005.08054 (2020)] to relate the SVM solution to the min-norm interpolating solution. Second, we derive novel non-asymptotic bounds on the classification error of the latter. Combining the two, we present novel sufficient conditions on the covariance spectrum and on the signal-to-noise ratio (SNR) under which interpolating estimators achieve asymptotically optimal performance as overparameterization increases. Interestingly, our results extend to a noisy model in which labels are flipped with constant probability. Contrary to previously studied discriminative data models, our results emphasize the crucial role of the SNR and its interplay with the data covariance. Finally, via a combination of analytical arguments and numerical demonstrations, we identify conditions under which the interpolating estimator performs better than its regularized counterparts.
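The following is a minimal numerical sketch, in Python with NumPy and scikit-learn, of the setup the abstract describes; it is not the authors' code. It draws data from the standard binary Gaussian mixture x = y·mu + z with z ~ N(0, Sigma), computes the min-norm interpolator of the ±1 labels, approximates the hard-margin SVM, and compares against a ridge-regularized baseline. All concrete choices (n, p, the mean direction, the decaying spectrum, the ridge parameter lambda) are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of the abstract's setting: x_i = y_i * mu + z_i,
# z_i ~ N(0, Sigma), labels y_i in {-1, +1}. Parameter values are assumptions.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, p = 50, 1000                               # overparameterized regime: p >> n
mu = np.zeros(p)
mu[0] = 4.0                                   # ||mu|| controls the SNR
spectrum = np.arange(1, p + 1) ** -0.5        # a decaying diagonal covariance spectrum

y = rng.choice([-1.0, 1.0], size=n)
X = y[:, None] * mu + rng.standard_normal((n, p)) * np.sqrt(spectrum)

# Min-norm interpolator of the +/-1 labels (implicit bias of least-squares loss):
w_ls = np.linalg.pinv(X) @ y                  # min ||w||_2 subject to X w = y

# Max-margin SVM (implicit bias of logistic loss); large C approximates hard margin:
w_svm = LinearSVC(C=1e6, fit_intercept=False, max_iter=200_000).fit(X, y).coef_.ravel()

# When every training point attains the margin ("abundance of support vectors"),
# the SVM and the min-norm interpolator point in the same direction:
cos = w_ls @ w_svm / (np.linalg.norm(w_ls) * np.linalg.norm(w_svm))
print(f"cosine(min-norm, max-margin) = {cos:.4f}")

# Ridge-regularized baseline for comparison (dual form, cheap since n < p):
lam = 1.0
w_ridge = X.T @ np.linalg.solve(X @ X.T + lam * np.eye(n), y)

def test_error(w, n_test=4000):
    """Classification error on fresh draws from the same mixture."""
    yt = rng.choice([-1.0, 1.0], size=n_test)
    Xt = yt[:, None] * mu + rng.standard_normal((n_test, p)) * np.sqrt(spectrum)
    return float(np.mean(np.sign(Xt @ w) != yt))

print(f"interpolator error: {test_error(w_ls):.4f}")
print(f"ridge (lambda={lam}) error: {test_error(w_ridge):.4f}")
```

In regimes where all training points lie on the margin, the printed cosine should be close to 1; this coincidence of the two implicit biases is the link, following Muthukumar et al., that the paper exploits. Varying the SNR (via mu), the spectrum decay, and lambda makes it possible to probe numerically when the interpolator matches or beats the regularized estimate.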
Recommendations
- Benign overfitting in linear regression
- Just interpolate: kernel "ridgeless" regression can generalize
- Deep learning: a statistical viewpoint
- Large scale analysis of generalization error in learning using margin based classification methods
- Overparameterization and generalization error: weighted trigonometric interpolation
Cites work
- scientific article; zbMATH DE number 7370646
- scientific article; zbMATH DE number 7306870
- scientific article; zbMATH DE number 7415102
- A modern maximum-likelihood theory for high-dimensional logistic regression
- A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent
- Benign overfitting in linear regression
- Convexity, Classification, and Risk Bounds
- Deep double descent: where bigger models and more data hurt
- Deep learning
- High-dimensional probability. An introduction with applications in data science
- High-dimensional statistics. A non-asymptotic viewpoint
- Introduction to algorithms
- Precise Error Analysis of Regularized $M$-Estimators in High Dimensions
- Reconciling modern machine-learning practice and the classical bias-variance trade-off
- The implicit bias of gradient descent on separable data
- Two models of double descent for weak features
Cited in 4 documents