Provably training overparameterized neural network classifiers with non-convex constraints
Publication: 2106783
DOI: 10.1214/22-EJS2036
MaRDI QID: Q2106783
Mladen Kolar, You-Lin Chen, Zhaoran Wang
Publication date: 19 December 2022
Published in: Electronic Journal of Statistics
Full work available at URL: https://arxiv.org/abs/2012.15274
Cites Work
- Corrigendum to: "On the complexity of finding first-order critical points in constrained nonlinear optimization"
- A selective overview of deep learning
- Gradient descent optimizes over-parameterized deep ReLU networks
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Fairness through awareness
- A Dynamic Near-Optimal Algorithm for Online Linear Programming
- Online Learning and Online Convex Optimization
- Stochastic Model-Based Minimization of Weakly Convex Functions
- Non-convex Optimization for Machine Learning
- Agnostic Learning of Monomials by Halfspaces Is Hard
- Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds
- Tensor Canonical Correlation Analysis With Convergence and Statistical Guarantees
- Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
- Wide neural networks of any depth evolve as linear models under gradient descent
- Advancing subgroup fairness via sleeping experts
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians