Theoretical investigation of generalization bounds for adversarial learning of deep neural networks (Q2241474)
From MaRDI portal
scientific article
Statements
Theoretical investigation of generalization bounds for adversarial learning of deep neural networks (English)
9 November 2021
The authors investigate the generalization behavior of adversarial learning through Rademacher complexity. The paper establishes three results: \begin{itemize} \item[1.] A tighter upper bound on the Rademacher complexity of the class of functions representable as DNNs with spectral normalization [\textit{T. Miyato} et al., ``Spectral normalization for generative adversarial networks'', Preprint, \url{arXiv:1802.05957}] and low-rank weight matrices under the Fast Gradient Sign Method (FGSM), a commonly used adversarial training method. This means that, theoretically, adversarial learning via FGSM generalizes better than previously suggested in the literature. \item[2.] A proof that adversarial training is never easier than natural training: the Rademacher complexity of adversarial learning is at least that of its natural-learning counterpart. \item[3.] Experiments on a synthetic dataset verifying the theoretical findings. In particular, the authors demonstrate that the Rademacher complexity of adversarial learning is independent of the depth of the network when the network has low-rank weight matrices. \end{itemize}
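As a concrete illustration (not code from the paper under review), FGSM's one-step perturbation $x' = x + \varepsilon \cdot \mathrm{sign}(\nabla_x L(x, y))$ can be sketched for a logistic-regression loss, where the input gradient is available in closed form; the model, data, and $\varepsilon$ below are invented toy values:

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """Craft an FGSM adversarial example for logistic regression.

    For the binary cross-entropy loss L(x) = -[y log s + (1-y) log(1-s)]
    with s = sigmoid(w.x + b), the input gradient is dL/dx = (s - y) * w,
    so the FGSM step is x_adv = x + eps * sign(dL/dx).
    """
    s = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid prediction
    grad_x = (s - y) * w                    # closed-form gradient of the loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy example: a point correctly classified by w, perturbed toward the boundary.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 1.0])                    # margin w.x + b = 1 > 0, predicted class 1
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.6)
# The perturbation moves against the margin: w.x_adv + b < w.x + b.
```

Adversarial training in the FGSM setting then minimizes the loss on such perturbed inputs instead of (or alongside) the clean ones.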
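The empirical Rademacher complexity the review refers to, $\widehat{\mathfrak{R}}_S(\mathcal{F}) = \mathbb{E}_\sigma\!\left[\sup_{f \in \mathcal{F}} \frac{1}{n}\sum_{i=1}^n \sigma_i f(x_i)\right]$, can be estimated by Monte Carlo when the function class is finite; the following minimal sketch (an assumption-laden illustration, not the paper's method) represents each function by its vector of outputs on the sample $S$:

```python
import numpy as np

def empirical_rademacher(outputs, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity
    E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ] for a finite class.

    `outputs` has shape (|F|, n): each row is one function f evaluated
    on the n sample points of S.
    """
    rng = np.random.default_rng(seed)
    n_funcs, n = outputs.shape
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=n)  # i.i.d. Rademacher signs
        total += np.max(outputs @ sigma) / n     # sup over the finite class
    return total / n_trials

# Tiny two-function class {f, -f} with f = 1 on all four sample points:
# the estimate approaches E[|sigma_1 + ... + sigma_4|] / 4 = 0.375.
outputs = np.array([[1.0, 1.0, 1.0, 1.0],
                    [-1.0, -1.0, -1.0, -1.0]])
est = empirical_rademacher(outputs)
```

For richer classes such as DNNs the supremum is not computable this way, which is why bounds like those in the paper are derived analytically.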
adversarial learning
deep neural networks
generalization bounds
Lipschitz continuity
natural learning
Rademacher complexity