Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation (Q2068413)

From MaRDI portal
Property / cites work (all normal rank):
    Statistical Mechanics of Learning
    The elements of statistical learning. Data mining, inference, and prediction
    Q5483032
    Q5270493
    Learning by on-line gradient descent
    Transient dynamics of on-line learning in two-layered neural networks
    Functional optimization of online algorithms in multilayer neural networks
    Phase Transitions in Machine Learning
    Statistical physics and representations in real and artificial neural networks
    Mean-field inference methods for neural networks
    The committee machine: computational to statistical gaps in learning a two-layers neural network
    Storage capacity of the fully-connected committee machine
    Learning from examples in fully connected committee machines
    Approximation by superpositions of a sigmoidal function
    Learning dynamics on different timescales

Latest revision as of 18:15, 27 July 2024

scientific article
Language: English
Label: Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation
Description: scientific article

    Statements

    Hidden unit specialization in layered neural networks: ReLU vs. sigmoidal activation (English)
    19 January 2022
    neural networks
    machine learning
    statistical physics

    Identifiers