Double Sampling Randomized Smoothing

From MaRDI portal
Publication:6402237

arXiv: 2206.07912 · MaRDI QID: Q6402237 · FDO: Q6402237


Authors: Linyi Li, Jiawei Zhang, Tao Xie, Bo Li


Publication date: 16 June 2022

Abstract: Neural networks (NNs) are known to be vulnerable to adversarial perturbations, and thus there is a line of work aiming to provide robustness certification for NNs, such as randomized smoothing, which samples smoothing noise from a certain distribution to certify the robustness of a smoothed classifier. However, as shown by previous work, the certified robust radius in randomized smoothing suffers from scaling to large datasets ("curse of dimensionality"). To overcome this hurdle, we propose a Double Sampling Randomized Smoothing (DSRS) framework, which exploits the sampled probability from an additional smoothing distribution to tighten the robustness certification of the previous smoothed classifier. Theoretically, under mild assumptions, we prove that DSRS can certify a Θ(√d) robust radius under the ℓ2 norm, where d is the input dimension, implying that DSRS may be able to break the curse of dimensionality of randomized smoothing. We instantiate DSRS for a generalized family of Gaussian smoothing and propose an efficient and sound computing method based on customized dual optimization that accounts for sampling error. Extensive experiments on MNIST, CIFAR-10, and ImageNet verify our theory and show that DSRS consistently certifies larger robust radii than existing baselines under different settings. Code is available at https://github.com/llylly/DSRS.
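For context, the single-distribution certificate that DSRS tightens can be sketched as follows. This is a minimal illustration of the standard Gaussian randomized-smoothing bound (not the DSRS dual-optimization certificate itself, which additionally uses the sampled probability under a second smoothing distribution); the function name and parameters are chosen for illustration.

```python
from statistics import NormalDist

def certified_radius(p_lower: float, sigma: float) -> float:
    """Standard randomized-smoothing certificate: if the smoothed
    classifier's top-class probability is lower-bounded by p_lower > 1/2
    under isotropic Gaussian noise N(0, sigma^2 I), the prediction is
    certifiably robust within an l2 ball of radius
        sigma * Phi^{-1}(p_lower),
    where Phi^{-1} is the standard normal inverse CDF."""
    if p_lower <= 0.5:
        return 0.0  # no certificate when the lower bound is at most 1/2
    return sigma * NormalDist().inv_cdf(p_lower)

# Example: with sigma = 1.0 and a 0.99 lower bound on the top-class
# probability, the certified l2 radius is Phi^{-1}(0.99) ~= 2.326.
print(certified_radius(0.99, 1.0))
```

Because Φ⁻¹(p) grows only slowly as p → 1, this radius does not scale with the input dimension d; the DSRS framework's Θ(√d) result addresses exactly this limitation by incorporating information from an additional smoothing distribution.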




Has companion code repository: https://github.com/dsgiitr/re_dsrs
This page was built for publication: Double Sampling Randomized Smoothing