Scientific article; zbMATH DE number 7626727
Publication:5053206
Publication date: 6 December 2022
Full work available at URL: https://arxiv.org/abs/2002.11219
Title: Neural networks are convex regularizers: exact polynomial-time convex optimization formulations for two-layer networks (title taken from the linked arXiv record; the zbMATH Open title field is unavailable due to conflicting licenses)
Related Items (4)
- Principled deep neural network training through linear programming
- Efficient Global Optimization of Two-Layer ReLU Networks: Quadratic-Time Algorithms and Adversarial Training
- Poster: Convex Scenario Optimisation for ReLU Networks
- Unnamed item (title unavailable)
Cites Work
- Ten further cited works (titles unavailable due to licensing)
- Probability in Banach spaces. Isoperimetry and processes
- Equivalence of minimal \(\ell_0\)- and \(\ell_p\)-norm solutions of linear equalities, inequalities and linear programs for sufficiently small \(p\)
- Particular formulae for the Moore–Penrose inverse of a columnwise partitioned matrix
- Bayesian learning for neural networks
- Decoding by Linear Programming
- Hardness of Learning Halfspaces with Noise
- Adaptive and Oblivious Randomized Subspace Methods for High-Dimensional Optimization: Sharp Analysis and Lower Bounds
- SuperMann: A Superlinearly Convergent Algorithm for Finding Fixed Points of Nonexpansive Operators
- Breaking the Curse of Dimensionality with Convex Neural Networks
- ℓ1 Regularization in Infinite Dimensional Feature Spaces
- For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is also the sparsest solution
- Convex Analysis
- Wide neural networks of any depth evolve as linear models under gradient descent