scientific article; zbMATH DE number 6982943
Zbl 1468.68183 · arXiv: 1609.07061 · MaRDI QID: Q4558516
Authors: Yoshua Bengio, Daniel Soudry, Matthieu Courbariaux, Itay Hubara, Ran El-Yaniv
Publication date: 22 November 2018
Full work available at URL: https://arxiv.org/abs/1609.07061
Title: Quantized neural networks: training neural networks with low precision weights and activations
Keywords: computer vision; deep learning; language models; energy-efficient neural networks; neural networks compression
MSC classification: Artificial neural networks and deep learning (68T07); Machine vision and scene understanding (68T45); Neural nets and related approaches to inference from stochastic processes (62M45)
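The technique the title and keywords describe is training networks whose weights and activations are quantized to low precision, with gradients propagated through the non-differentiable quantizer by a straight-through estimator (STE). Below is a minimal sketch of that idea, assuming PyTorch; the names BinarizeSTE and BinaryLinear are illustrative and not taken from the authors' released code.

```python
import torch

class BinarizeSTE(torch.autograd.Function):
    # Forward: quantize to {-1, +1} with sign() (sign(0) = 0 in PyTorch;
    # negligible for this sketch).
    # Backward: pass the gradient straight through, zeroed where |x| > 1
    # (the saturating STE commonly used for binarized activations).
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)

class BinaryLinear(torch.nn.Linear):
    # Keeps full-precision "latent" weights for the optimizer update,
    # but computes the forward pass with their binarized copies.
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return torch.nn.functional.linear(x, w_bin, self.bias)

# Tiny usage example: one SGD step on random data.
layer = BinaryLinear(8, 4)
opt = torch.optim.SGD(layer.parameters(), lr=0.1)
loss = layer(torch.randn(2, 8)).pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()  # updates the latent full-precision weights
```

Because the optimizer updates the latent full-precision weights while inference needs only their quantized copies, the trained model can be deployed with low-bit arithmetic, which is the energy-efficiency and compression angle named in the keywords.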
Related Items
Active Subspace of Neural Networks: Structural Analysis and Universal Attacks
Towards Compact Neural Networks via End-to-End Training: A Bayesian Tensor Approach with Automatic Rank Determination
Pruning deep convolutional neural networks architectures with evolution strategy
Learning in the machine: random backpropagation and the deep learning channel
GXNOR-Net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework
Neural network approximation: three hidden layers are enough
PAC-learning with approximate predictors
Stochastic quantization for learning accurate low-bit deep neural networks
Limitations of neural network training due to numerical instability of backpropagation
Pruning during training by network efficacy modeling
Optimization of sparsity-constrained neural networks as a mixed integer linear program
Neural logic rule layers
STDP-Compatible Approximation of Backpropagation in an Energy-Based Model
An SMT-based approach for verifying binarized neural networks
Simple Classification using Binary Data
Learning quantized neural nets by coarse gradient method for nonlinear classification
Rectified binary convolutional networks with generative adversarial learning
Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation
Some open questions on morphological operators and representations in the deep learning era. A personal vision
BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights
Loss aware post-training quantization
Blended coarse gradient descent for full quantization of deep neural networks
Stochastic Markov gradient descent and training low-bit neural networks
Quantized convolutional neural networks through the lens of partial differential equations
Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
Recurrence of optimum for training weight and activation quantized networks
Binary quantized network training with sharpness-aware minimization
On neural network equivalence checking using SMT solvers