Commit

Notes for the layers selected
spencerwooo committed Nov 29, 2024
1 parent 9f26aa3 commit b7f160d
Showing 2 changed files with 8 additions and 7 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -100,6 +100,7 @@ Gradient-based attacks:
| TI-FGSM | $\ell_\infty$ | CVPR 2019 | [Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks](https://arxiv.org/abs/1904.02884) | `TIFGSM` |
| NI-FGSM | $\ell_\infty$ | ICLR 2020 | [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks](https://arxiv.org/abs/1908.06281) | `NIFGSM` |
| SI-NI-FGSM | $\ell_\infty$ | ICLR 2020 | [Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks](https://arxiv.org/abs/1908.06281) | `SINIFGSM` |
+ | DR | $\ell_\infty$ | CVPR 2020 | [Enhancing Cross-Task Black-Box Transferability of Adversarial Examples With Dispersion Reduction](https://arxiv.org/abs/1911.11616) | `DR` |
| VMI-FGSM | $\ell_\infty$ | CVPR 2021 | [Enhancing the Transferability of Adversarial Attacks through Variance Tuning](https://arxiv.org/abs/2103.15571) | `VMIFGSM` |
| VNI-FGSM | $\ell_\infty$ | CVPR 2021 | [Enhancing the Transferability of Adversarial Attacks through Variance Tuning](https://arxiv.org/abs/2103.15571) | `VNIFGSM` |
| Admix | $\ell_\infty$ | ICCV 2021 | [Admix: Enhancing the Transferability of Adversarial Attacks](https://arxiv.org/abs/2102.00436) | `Admix` |
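For context, the `DR` row added above is used like any other attack in the table. A minimal sketch of direct usage, assuming `DR` and `AttackModel` are importable from the package root and that `AttackModel.from_pretrained` attaches a `normalize` transform as elsewhere in torchattack (assumptions, not shown in this diff):

```python
import torch
from torchattack import DR, AttackModel  # assumed package-root exports

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Surrogate model; `from_pretrained` and its attached `normalize` are assumed
# to follow the usual torchattack pattern.
model = AttackModel.from_pretrained('vgg16', device)
attack = DR(model, model.normalize, device=device, eps=8 / 255, steps=100)

# x: batch of images in [0, 1], y: ground-truth labels (hypothetical tensors)
# x_adv = attack(x, y)
```

The `feature_layer` argument can be left at its default for the three built-in models listed in `_builtin_models` below.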
14 changes: 7 additions & 7 deletions torchattack/dr.py
@@ -20,7 +20,7 @@ class DR(Attack):
device: Device to use for tensors. Defaults to cuda if available.
model_name: The name of the model to attack. Defaults to ''.
eps: The maximum perturbation. Defaults to 8/255.
- steps: Number of steps. Defaults to 10.
+ steps: Number of steps. Defaults to 100.
alpha: Step size, `eps / steps` if None. Defaults to None.
decay: Decay factor for the momentum term. Defaults to 1.0.
feature_layer: The layer of the model to extract features from and apply
@@ -33,9 +33,9 @@ class DR(Attack):
# Specified in _builtin_models assume models that are loaded from,
# or share the exact structure as, torchvision model variants.
_builtin_models = {
- 'vgg16': 'features.14',
- 'resnet152': 'layer2.7.conv3',
- 'inception_v3': 'Mixed_5b',
+ 'vgg16': 'features.14', # conv3-3 for VGG-16
+ 'resnet152': 'layer2.7.conv3', # conv3-8-3 for ResNet-152
+ 'inception_v3': 'Mixed_5b', # Mixed_5b (Group A) for Inception-v3
}

def __init__(
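The new inline comments name the conventional layer behind each dotted path. Since the paths are plain torchvision module names, they can be sanity-checked with `named_modules()`; a quick verification sketch (torchvision only, separate from this file):

```python
import torchvision.models as tvm

# Resolve each dotted path from `_builtin_models` against its torchvision model.
for builder, path in [
    (tvm.vgg16, 'features.14'),         # conv3-3
    (tvm.resnet152, 'layer2.7.conv3'),  # last conv of the 8th block in conv3_x
    (tvm.inception_v3, 'Mixed_5b'),     # InceptionA block (Mixed_5b)
]:
    model = builder(weights=None)
    module = dict(model.named_modules())[path]
    print(f'{builder.__name__}.{path} -> {module.__class__.__name__}')
```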
@@ -45,7 +45,7 @@ def __init__(
device: torch.device | None = None,
model_name: str = '',
eps: float = 8 / 255,
- steps: int = 10,
+ steps: int = 100,
alpha: float | None = None,
decay: float = 1.0,
feature_layer: str = '',
@@ -56,7 +56,7 @@ def __init__(

# If model is initialized via `torchattack.AttackModel`, infer its model_name
# from automatically attached attribute during instantiation.
- if not model_name:
+ if not model_name and hasattr(model, 'model_name'):
model_name = model.model_name

self.eps = eps
@@ -137,4 +137,4 @@ def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
if __name__ == '__main__':
from torchattack.eval.runner import run_attack

- run_attack(DR, model_name='inception_v3', victim_model_names=['resnet18'])
+ run_attack(DR, model_name='vgg16', victim_model_names=['resnet18'])
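For readers landing on this commit without the paper at hand: the layers noted above are where DR measures feature dispersion, and the attack descends on the standard deviation of that feature map. A simplified, self-contained sketch of the idea (plain PyTorch, no momentum or input normalization; not the torchattack implementation):

```python
import torch
import torchvision.models as tvm

model = tvm.vgg16(weights=tvm.VGG16_Weights.DEFAULT).eval()  # downloads pretrained weights
layer = dict(model.named_modules())['features.14']  # conv3-3, as in _builtin_models

# Capture the feature map of the chosen layer with a forward hook.
feats = {}
hook = layer.register_forward_hook(lambda m, i, o: feats.update(out=o))

eps, steps = 8 / 255, 100
alpha = eps / steps
x = torch.rand(1, 3, 224, 224)  # stand-in input in [0, 1]
x_adv = x.clone()

for _ in range(steps):
    x_adv.requires_grad_(True)
    model(x_adv)
    loss = feats['out'].std()  # "dispersion" of the intermediate features
    grad = torch.autograd.grad(loss, x_adv)[0]
    with torch.no_grad():
        x_adv = x_adv - alpha * grad.sign()                       # reduce dispersion
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0, 1)  # stay in the eps-ball

hook.remove()
```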
