scientific article; zbMATH DE number 7370646
Publication:4999109
Authors: Niladri S. Chatterji, Philip M. Long
Publication date: 9 July 2021
Full work available at URL: https://arxiv.org/abs/2004.12019
Title: Finite-sample analysis of interpolating linear classifiers in the overparameterized regime
Keywords: classification; high-dimensional statistics; risk bounds; finite-sample analysis; class-conditional Gaussians
Related Items
- Surprises in high-dimensional ridgeless least squares interpolation
- A precise high-dimensional asymptotic theory for boosting and minimum-\(\ell_1\)-norm interpolated classifiers
- Binary Classification of Gaussian Mixtures: Abundance of Support Vectors, Benign Overfitting, and Regularization
- Benefit of Interpolation in Nearest Neighbor Algorithms
- On the robustness of minimum norm interpolators and regularized empirical risk minimizers
Cites Work
- Risk bounds for statistical learning
- High-dimensional classification using features annealed independence rules
- Estimation of dependences based on empirical data. Transl. from the Russian by Samuel Kotz
- Toward efficient agnostic learning
- High-dimensional asymptotics of prediction: ridge regression and classification
- Some theory for Fisher's linear discriminant function, 'naive Bayes', and some alternatives when there are many more variables than observations
- Random classification noise defeats all convex potential boosters
- The phase transition for the existence of the maximum likelihood estimate in high-dimensional logistic regression
- Just interpolate: kernel ``ridgeless'' regression can generalize
- L1 least squares for sparse high-dimensional LDA
- Statistical performance of support vector machines
- Higher criticism thresholding: Optimal feature selection when useful features are rare and weak
- Impossibility of successful classification when useful features are rare and weak
- Efficient noise-tolerant learning from statistical queries
- Sample-efficient strategies for learning in the presence of noise
- The Power of Localization for Efficiently Learning Linear Separators with Noise
- A Direct Estimation Approach to Sparse Linear Discriminant Analysis
- Agnostically Learning Halfspaces
- High-Dimensional Probability
- Structural risk minimization over data-dependent hierarchies
- Balls and bins: A study in negative dependence
- Chernoff–Hoeffding Bounds for Applications with Limited Independence
- Two Models of Double Descent for Weak Features
- Benign overfitting in linear regression
- Classification with imperfect training labels
- High Dimensional Linear Discriminant Analysis: Optimality, Adaptive Algorithm and Missing Data
- Reconciling modern machine-learning practice and the classical bias–variance trade-off
- A modern maximum-likelihood theory for high-dimensional logistic regression
- Enumeration of Seven-Argument Threshold Functions
- Boosting in the presence of noise
This page was built for publication: Finite-sample analysis of interpolating linear classifiers in the overparameterized regime