Benign overfitting in linear regression


Publication: 5073215

DOI: 10.1073/PNAS.1907378117
zbMath: 1485.62085
arXiv: 1906.11300
OpenAlex: W3018252856
Wikidata: Q93214520
Scholia: Q93214520
MaRDI QID: Q5073215

Peter L. Bartlett, Philip M. Long, Gábor Lugosi, Alexander Tsigler

Publication date: 5 May 2022

Published in: Proceedings of the National Academy of Sciences

Full work available at URL: https://arxiv.org/abs/1906.11300






Related Items (71)

Canonical thresholding for nonsparse high-dimensional linear regression
Mehler’s Formula, Branching Process, and Compositional Kernels of Deep Neural Networks
Double Double Descent: On Generalization Errors in Transfer Learning between Linear Regression Tasks
Deep learning: a statistical viewpoint
Fit without fear: remarkable mathematical phenomena of deep learning through the prism of interpolation
Neural network approximation
Deep learning for inverse problems. Abstracts from the workshop held March 7--13, 2021 (hybrid meeting)
Surprises in high-dimensional ridgeless least squares interpolation
Generalization error of random feature and kernel methods: hypercontractivity and kernel matrix concentration
Learning curves of generic features maps for realistic datasets with a teacher-student model*
Generalization error rates in kernel regression: the crossover from the noiseless to noisy regime*
On the proliferation of support vectors in high dimensions*
A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
Dimensionality Reduction, Regularization, and Generalization in Overparameterized Regressions
Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization
The unreasonable effectiveness of deep learning in artificial intelligence
Overparameterization and Generalization Error: Weighted Trigonometric Interpolation
Weighted random sampling and reconstruction in general multivariate trigonometric polynomial spaces
Benefit of Interpolation in Nearest Neighbor Algorithms
HARFE: hard-ridge random feature expansion
A note on the prediction error of principal component regression in high dimensions
High dimensional binary classification under label shift: phase transition and regularization
On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions
Free dynamics of feature learning processes
A Universal Trade-off Between the Model Size, Test Loss, and Training Loss of Linear Predictors
Towards data augmentation in graph neural network: an overview and evaluation
PAC-learning with approximate predictors
Unnamed Item
Random neural networks in the infinite width limit as Gaussian processes
A domain-theoretic framework for robustness analysis of neural networks
High-Dimensional Analysis of Double Descent for Linear Regression with Random Projections
Measuring Complexity of Learning Schemes Using Hessian-Schatten Total Variation
A geometric view on the role of nonlinear feature maps in few-shot learning
A Generalization Gap Estimation for Overparameterized Models via the Langevin Functional Variance
Benign Overfitting and Noisy Features
Learning ability of interpolating deep convolutional neural networks
A Review of Process Optimization for Additive Manufacturing Based on Machine Learning
Dimension-free bounds for sums of dependent matrices and operators with heavy-tailed distributions
The leave-worst-\(k\)-out criterion for cross validation
Benign overfitting and adaptive nonparametric regression
Quantitative limit theorems and bootstrap approximations for empirical spectral projectors
Over-parametrized deep neural networks minimizing the empirical risk do not generalize well
Unnamed Item
Unnamed Item
Unnamed Item
Distributed SGD in overparametrized linear regression
A moment-matching approach to testable learning and a new characterization of Rademacher complexity
Same root different leaves: time series and cross-sectional methods in panel data
Estimation of Linear Functionals in High-Dimensional Linear Models: From Sparsity to Nonsparsity
The common intuition to transfer learning can win or lose: case studies for linear regression
Convergence analysis for over-parameterized deep learning
Fluctuations, bias, variance and ensemble of learners: exact asymptotics for convex losses in high-dimension
Redundant representations help generalization in wide neural networks
New equivalences between interpolation and SVMs: kernels and structured features
Double data piling: a high-dimensional solution for asymptotically perfect multi-category classification
Deep networks for system identification: a survey
High-dimensional dynamics of generalization error in neural networks
Double data piling leads to perfect classification
An elementary analysis of ridge regression with random design
Generalization Error of Minimum Weighted Norm and Kernel Interpolation
Dimension independent excess risk by stochastic gradient descent
Implicit Regularization and Momentum Algorithms in Nonlinearly Parameterized Adaptive Control and Prediction
On the robustness of minimum norm interpolators and regularized empirical risk minimizers
Unnamed Item
Unnamed Item
AdaBoost and robust one-bit compressed sensing
A Unifying Tutorial on Approximate Message Passing
The interpolation phase transition in neural networks: memorization and generalization under lazy training
A random matrix analysis of random Fourier features: beyond the Gaussian kernel, a precise phase transition, and the corresponding double descent*
For interpolating kernel machines, minimizing the norm of the ERM solution maximizes stability
Two Models of Double Descent for Weak Features




Cites Work




This page was built for publication: Benign overfitting in linear regression