Theoretical investigation of generalization bounds for adversarial learning of deep neural networks (Q2241474)

From MaRDI portal
 


Language: English
Label: Theoretical investigation of generalization bounds for adversarial learning of deep neural networks
Description: scientific article

    Statements

    Theoretical investigation of generalization bounds for adversarial learning of deep neural networks (English)
    9 November 2021
    The authors investigate the generalization behavior of adversarial learning through the lens of Rademacher complexity. The paper establishes three main results: \begin{itemize} \item[1.] A tighter upper bound on the Rademacher complexity of the class of functions representable as DNNs with spectral normalization [\textit{T. Miyato} et al., ``Spectral normalization for generative adversarial networks'', Preprint, \url{arXiv:1802.05957}] and low-rank weight matrices under the Fast Gradient Sign Method (FGSM), a commonly used adversarial training method (standard formulations of both notions are sketched after this list). This means that, theoretically, adversarial learning through FGSM is easier than previously suggested in the literature. \item[2.] The authors also prove that adversarial training is never easier than natural training, by showing that the Rademacher complexity of adversarial learning is at least as large as its natural-learning counterpart. \item[3.] The authors conduct experiments on a synthetic dataset to verify the theoretical findings. In particular, they demonstrate that the Rademacher complexity of adversarial learning is independent of the depth of the network when the weight matrices are of low rank. \end{itemize}
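    For context, in standard notation that need not match the paper's own: given a sample \(S = (x_1, \dots, x_n)\) and a class \(\mathcal{F}\) of real-valued functions, the empirical Rademacher complexity referred to above is \[ \widehat{\mathfrak{R}}_S(\mathcal{F}) = \mathbb{E}_{\sigma}\Big[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \Big], \] where \(\sigma_1, \dots, \sigma_n\) are i.i.d. Rademacher variables taking the values \(\pm 1\) with equal probability. FGSM, in its common form, replaces each training input \(x\) with label \(y\) by the perturbed input \[ x^{\mathrm{adv}} = x + \epsilon \, \operatorname{sign}\big( \nabla_x \ell(f(x), y) \big), \] where \(\ell\) is the training loss and \(\epsilon > 0\) is the perturbation budget; adversarial training then minimizes the loss on these perturbed inputs.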
    adversarial learning
    deep neural networks
    generalization bounds
    Lipschitz continuity
    natural learning
    Rademacher complexity