Getting away with more network pruning: from sparsity to geometry and linear regions
Publication: 6057261
DOI: 10.1007/978-3-031-33271-5_14
arXiv: 2301.07966
OpenAlex: W4377231229
MaRDI QID: Q6057261
No author found.
Publication date: 4 October 2023
Published in: Integration of Constraint Programming, Artificial Intelligence, and Operations Research
Full work available at URL: https://arxiv.org/abs/2301.07966
Mathematics Subject Classification:
- Combinatorial optimization (90C27)
- Problem solving in the context of artificial intelligence (heuristics, search strategies, etc.) (68T20)
- Operations research and management science (90Bxx)
Cites Work
- Unnamed Item
- Deep neural networks and mixed integer linear optimization
- Multilayer feedforward networks are universal approximators
- Lossless compression of deep neural networks
- Error bounds for approximations with deep ReLU networks
- On mathematical programming with indicator constraints
- Facing up to arrangements: face-count formulas for partitions of space by hyperplanes
- Sensitivity-Informed Provable Pruning of Neural Networks
- JANOS: An Integrated Predictive and Prescriptive Modeling Framework
- Sharp Bounds for the Number of Regions of Maxout Networks and Vertices of Minkowski Sums
- Approximation by superpositions of a sigmoidal function
- Strong mixed-integer programming formulations for trained neural networks