GXNOR-Net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework (Q2179802)

From MaRDI portal
Property / cites work: Q4558516
Property / cites work: Efficient Associative Computation with Discrete Synapses
Property / cites work: Memory Capacities for Synaptic and Structural Plasticity
Property / cites work: Q2934059


Language: English
Label: GXNOR-Net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework
Description: scientific article

    Statements

    GXNOR-Net: training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework (English)
    13 May 2020
    GXNOR-Net
    discrete state transition
    ternary neural networks
    sparse binary networks
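
The keywords above refer to ternary neural networks, in which weights and activations are constrained to the set {-1, 0, +1}. As a minimal illustrative sketch only, the snippet below shows deterministic ternarization of a weight matrix; the `ternarize` function and its `threshold` value are assumptions for illustration and do not reproduce the paper's GXNOR-Net training scheme, which additionally avoids storing full-precision weights by using a discrete state transition.

```python
import numpy as np

def ternarize(x, threshold=0.5):
    """Map real values to the ternary set {-1, 0, +1}.

    Values whose magnitude is below `threshold` become 0; the remaining
    values keep only their sign. `threshold` is an illustrative choice,
    not a value taken from the paper.
    """
    t = np.zeros_like(x, dtype=float)
    t[x > threshold] = 1.0
    t[x < -threshold] = -1.0
    return t

# Example: ternarize a small weight matrix
w = np.array([[0.8, -0.1, -0.9],
              [0.3,  0.6, -0.4]])
print(ternarize(w))
# [[ 1.  0. -1.]
#  [ 0.  1.  0.]]
```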

    Identifiers