diff --git a/adversarial_robustness/README.md b/adversarial_robustness/README.md
index e6ba2f1..5c62cb5 100644
--- a/adversarial_robustness/README.md
+++ b/adversarial_robustness/README.md
@@ -58,11 +58,34 @@ python3 eval.py \
   --ckpt=${PATH_TO_CHECKPOINT} --depth=70 --width=16 --dataset=cifar10
 ```
 
+## Generated datasets
+
+Rebuffi et al. (2021) use samples generated by a Denoising Diffusion
+Probabilistic Model [(DDPM; Ho et al., 2020)](https://arxiv.org/abs/2006.11239)
+to improve robustness. The DDPM is trained solely on the original training data
+and does not use additional external data. The following table links to datasets
+of 1M **generated** samples for CIFAR-10, CIFAR-100 and SVHN.
+
+| dataset | model | size | link |
+|---|---|:---:|:---:|
+| CIFAR-10 | DDPM | 1M | [npz](https://storage.googleapis.com/dm-adversarial-robustness/cifar10_ddpm.npz) |
+| CIFAR-100 | DDPM | 1M | [npz](https://storage.googleapis.com/dm-adversarial-robustness/cifar100_ddpm.npz) |
+| SVHN | DDPM | 1M | [npz](https://storage.googleapis.com/dm-adversarial-robustness/svhn_ddpm.npz) |
+
+Each dataset can be loaded with NumPy. For example:
+
+```
+import numpy as np
+
+npzfile = np.load('cifar10_ddpm.npz')
+images = npzfile['image']
+labels = npzfile['label']
+```
 
 ## Citing this work
 
-If you use this code or these models in your work, please cite the relevant
-accompanying paper:
+If you use this code, data or these models in your work, please cite the
+relevant accompanying paper:
 
 ```
 @article{gowal2020uncovering,
diff --git a/adversarial_robustness/iclrw2021doing/README.md b/adversarial_robustness/iclrw2021doing/README.md
index d6cf694..e3b83aa 100644
--- a/adversarial_robustness/iclrw2021doing/README.md
+++ b/adversarial_robustness/iclrw2021doing/README.md
@@ -41,6 +41,29 @@ python3 eval.py \
   --ckpt=${PATH_TO_CHECKPOINT} --depth=70 --width=16 --dataset=cifar10
 ```
 
+## Generated datasets
+
+This work uses samples generated by a Denoising Diffusion
+Probabilistic Model [(DDPM; Ho et al., 2020)](https://arxiv.org/abs/2006.11239)
+to improve robustness. The DDPM is trained solely on the original training data
+and does not use additional external data. The following table links to datasets
+of 1M **generated** samples for CIFAR-10, CIFAR-100 and SVHN.
+
+| dataset | model | size | link |
+|---|---|:---:|:---:|
+| CIFAR-10 | DDPM | 1M | [npz](https://storage.googleapis.com/dm-adversarial-robustness/cifar10_ddpm.npz) |
+| CIFAR-100 | DDPM | 1M | [npz](https://storage.googleapis.com/dm-adversarial-robustness/cifar100_ddpm.npz) |
+| SVHN | DDPM | 1M | [npz](https://storage.googleapis.com/dm-adversarial-robustness/svhn_ddpm.npz) |
+
+Each dataset can be loaded with NumPy. For example:
+
+```
+import numpy as np
+
+npzfile = np.load('cifar10_ddpm.npz')
+images = npzfile['image']
+labels = npzfile['label']
+```
 
 ## Citing this work
 
diff --git a/adversarial_robustness/run.sh b/adversarial_robustness/run.sh
index 0b64b41..6f11347 100755
--- a/adversarial_robustness/run.sh
+++ b/adversarial_robustness/run.sh
@@ -21,12 +21,16 @@ pip install -r adversarial_robustness/requirements.txt
 python3 -m adversarial_robustness.jax.eval \
   --ckpt=dummy \
   --dataset=cifar10 \
+  --width=1 \
+  --depth=10 \
   --batch_size=1 \
   --num_batches=1
 
 python3 -m adversarial_robustness.pytorch.eval \
   --ckpt=dummy \
   --dataset=cifar10 \
+  --width=1 \
+  --depth=10 \
   --batch_size=1 \
   --num_batches=1 \
   --nouse_cuda
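
The README sections added above document the `image`/`label` key layout of the `.npz` archives. A minimal self-contained sketch of that access pattern follows; an in-memory buffer stands in for the real 1M-sample download, and the CIFAR-style `uint8` shapes are an assumption, since the diff does not state the array shapes:

```python
import io

import numpy as np

# Tiny stand-in for cifar10_ddpm.npz: per the README snippets, the real
# archive stores the generated samples under the keys 'image' and 'label'.
# Shapes/dtypes here are illustrative (CIFAR-style), not taken from the diff.
buf = io.BytesIO()
np.savez(buf,
         image=np.zeros((8, 32, 32, 3), dtype=np.uint8),
         label=np.arange(8, dtype=np.int64) % 10)
buf.seek(0)

# Loading mirrors the snippet added to both READMEs.
npzfile = np.load(buf)
images = npzfile['image']
labels = npzfile['label']
print(images.shape, labels.shape)  # (8, 32, 32, 3) (8,)
```

The same two-key lookup works unchanged on the downloaded files by passing the filename to `np.load` instead of a buffer.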