Provably training overparameterized neural network classifiers with non-convex constraints
From MaRDI portal
Publication:2106783
Cites work
- scientific article; zbMATH DE number 1046019
- scientific article; zbMATH DE number 2023357
- scientific article; zbMATH DE number 7064055
- scientific article; zbMATH DE number 5060482
- A dynamic near-optimal algorithm for online linear programming
- A selective overview of deep learning
- Advancing subgroup fairness via sleeping experts
- An adaptive stochastic sequential quadratic programming with differentiable exact augmented Lagrangians
- Corrigendum to: ``On the complexity of finding first-order critical points in constrained nonlinear optimization''
- Fairness through awareness
- Following the leader and fast rates in online linear prediction: curved constraint sets and other regularities
- Gradient descent optimizes over-parameterized deep ReLU networks
- Non-convex optimization for machine learning
- Online Linear Programming: Dual Convergence, New Algorithms, and Regret Bounds
- Online learning and online convex optimization
- Optimization with Non-Differentiable Constraints with Applications to Fairness, Recall, Churn, and Other Goals
- Stochastic first-order methods for convex and nonconvex functional constrained optimization
- Stochastic model-based minimization of weakly convex functions
- Tensor Canonical Correlation Analysis With Convergence and Statistical Guarantees
- Wide neural networks of any depth evolve as linear models under gradient descent
Cited in (3)
This page was built for publication: Provably training overparameterized neural network classifiers with non-convex constraints