The following pages link to (Q4558516):
Displaying 33 items.
- Learning in the machine: random backpropagation and the deep learning channel (Q1647946)
- Learning quantized neural nets by coarse gradient method for nonlinear classification (Q2050846)
- Rectified binary convolutional networks with generative adversarial learning (Q2054394)
- Some open questions on morphological operators and representations in the deep learning era. A personal vision (Q2061783)
- Loss aware post-training quantization (Q2071502)
- Stochastic Markov gradient descent and training low-bit neural networks (Q2073135)
- Quantized convolutional neural networks through the lens of partial differential equations (Q2079526)
- Recurrence of optimum for training weight and activation quantized networks (Q2105102)
- Binary quantized network training with sharpness-aware minimization (Q2111176)
- On neural network equivalence checking using SMT solvers (Q2112128)
- Pruning deep convolutional neural networks architectures with evolution strategy (Q2126266)
- GXNOR-Net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework (Q2179802)
- Stochastic quantization for learning accurate low-bit deep neural networks (Q2193833)
- An SMT-based approach for verifying binarized neural networks (Q2233508)
- Blended coarse gradient descent for full quantization of deep neural networks (Q2319868)
- Analyzing and Accelerating the Bottlenecks of Training Deep SNNs With Backpropagation (Q3386449)
- Simple Classification using Binary Data (Q4614095)
- (Q4614113)
- Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth (Q5004339)
- Active Subspace of Neural Networks: Structural Analysis and Universal Attacks (Q5037556)
- Towards Compact Neural Networks via End-to-End Training: A Bayesian Tensor Approach with Automatic Rank Determination (Q5037563)
- (Q5148973)
- (Q5149034)
- (Q5159430)
- BinaryRelax: A Relaxation Approach for Training Deep Neural Networks with Quantized Weights (Q5230408)
- STDP-Compatible Approximation of Backpropagation in an Energy-Based Model (Q5380662)
- Neural network approximation: three hidden layers are enough (Q6054944)
- PAC-learning with approximate predictors (Q6103580)
- Limitations of neural network training due to numerical instability of backpropagation (Q6122651)
- Pruning during training by network efficacy modeling (Q6134335)
- Optimization of sparsity-constrained neural networks as a mixed integer linear program (Q6145048)
- Neural logic rule layers (Q6199740)
- Optimal re-materialization strategies for heterogeneous chains: how to train deep neural networks with limited memory (Q6604162)