# Adversarial Robustness
This repository contains the code needed to evaluate models trained in *Doing More with Less: Improving Robustness using Generated Data*, which was accepted at the ICLR 2021 Workshop on Security and Safety in Machine Learning Systems.
## Contents
We have released our top-performing models in two formats, compatible with JAX and PyTorch. This repository also contains our model definitions.
## Running the example code
### Downloading a model
Download a model from the links listed in the table below. Clean and robust accuracies are measured on the full test set; robust accuracy is measured using AutoAttack.
| dataset | norm | radius | architecture | extra data | clean | robust | link |
|---|---|---|---|---|---|---|---|
| CIFAR-10 | ℓ∞ | 8 / 255 | WRN-70-16 | ✗ | 86.94% | 63.62% | jax, pt |
| CIFAR-10 | ℓ∞ | 8 / 255 | WRN-28-10 | ✗ | 85.97% | 60.73% | jax, pt |
| CIFAR-10 | ℓ2 | 128 / 255 | WRN-70-16 | ✗ | 90.83% | 78.39% | jax, pt |
| CIFAR-10 | ℓ2 | 128 / 255 | WRN-28-10 | ✗ | 90.24% | 77.44% | jax, pt |
| CIFAR-100 | ℓ∞ | 8 / 255 | WRN-70-16 | ✗ | 60.46% | 33.49% | jax, pt |
| CIFAR-100 | ℓ∞ | 8 / 255 | WRN-28-10 | ✗ | 59.18% | 30.81% | jax, pt |
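The robust accuracies above are obtained with AutoAttack's standard suite. As a rough sketch of how such an evaluation is typically run (this assumes the third-party `autoattack` package and a PyTorch model; `model`, `x_test`, and `y_test` are placeholders, not part of this repository):

```python
# Sketch of a standard AutoAttack robust-accuracy evaluation.
# Assumes the third-party `autoattack` package is installed; the model
# and data tensors passed in are placeholders.
EPS_LINF = 8 / 255  # the l-inf perturbation radius used in the table above


def robust_accuracy(model, x_test, y_test):
    """Runs AutoAttack's standard suite, then scores the model on the
    resulting adversarial examples."""
    from autoattack import AutoAttack  # imported lazily: optional dependency

    adversary = AutoAttack(model, norm='Linf', eps=EPS_LINF, version='standard')
    x_adv = adversary.run_standard_evaluation(x_test, y_test)
    preds = model(x_adv).argmax(dim=1)
    return (preds == y_test).float().mean().item()
```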
### Using the model
Once downloaded, a model can be evaluated for clean accuracy by running the `eval.py` script in either the `jax` or `pytorch` folder. For example:

```shell
cd jax
python3 eval.py \
  --ckpt=${PATH_TO_CHECKPOINT} --depth=70 --width=16 --dataset=cifar10
```
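Conceptually, the clean-accuracy evaluation reduces to counting top-1 matches over the test set in batches. A minimal framework-free sketch of that loop (the `predict` callable stands in for a real model's forward pass and is not part of this repository):

```python
# Framework-free sketch of a clean-accuracy loop like the one eval.py runs.
# `predict` stands in for a real model's forward pass and returns one
# predicted class index per input image.
def clean_accuracy(predict, images, labels, batch_size=100):
    correct = 0
    for start in range(0, len(images), batch_size):
        batch = images[start:start + batch_size]
        preds = predict(batch)
        correct += sum(int(p == y) for p, y in
                       zip(preds, labels[start:start + batch_size]))
    return correct / len(labels)


# Toy usage: a "model" that always predicts class 0.
acc = clean_accuracy(lambda batch: [0] * len(batch),
                     images=list(range(10)),
                     labels=[0, 0, 1, 0, 1, 0, 0, 0, 0, 1])
# acc == 0.7
```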
## Citing this work
If you use this code or these models in your work, please cite the full version of the paper, which combines data augmentation with generated samples:
```bibtex
@article{rebuffi2021fixing,
  title={Fixing Data Augmentation to Improve Adversarial Robustness},
  author={Rebuffi, Sylvestre-Alvise and Gowal, Sven and Calian, Dan A. and Stimberg, Florian and Wiles, Olivia and Mann, Timothy},
  journal={arXiv preprint arXiv:2103.01946},
  year={2021},
  url={https://arxiv.org/pdf/2103.01946}
}
```
## Disclaimer
This is not an official Google product.