This repository implements the Fast Minimum-Norm (FMN) attack, from the paper *Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints*.
🎯 Accepted at NeurIPS 2021! Paper available at this link.
🎉 Now also available in Foolbox, SecML, and Adversarial Library.
🎮 For a quick demo example, check out this notebook.
📝 For a more complete example, with different datasets and robust models, check out the full example notebook.
Here is a conceptual figure of the attack. In summary, the algorithm performs normalized gradient descent and projects the perturbation onto an epsilon-sized Lp ball, whose radius is adapted during the attack to find minimum-norm adversarial examples.
GIF created with SecML library.
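The idea above can be sketched on a toy linear model, where the true minimum-norm perturbation is known in closed form. Everything here (the model, the step size `alpha`, the adaptation rate `gamma`, and the exact schedule) is illustrative and heavily simplified, not the repository's implementation:

```python
import numpy as np

# Toy linear model: the input is "clean" while w @ x + b > 0.
# The closest adversarial point lies at L2 distance (w @ x0 + b) / ||w|| = 1.6.
w, b = np.array([3.0, 4.0]), 1.0
x0 = np.array([1.0, 1.0])

delta = np.zeros_like(x0)   # current perturbation
eps = np.inf                # adaptive norm constraint (unconstrained at first)
alpha, gamma = 0.5, 0.05    # step size and epsilon adaptation rate (illustrative)
best = None                 # smallest adversarial perturbation found so far

for _ in range(100):
    if w @ (x0 + delta) + b <= 0:  # crossed the decision boundary?
        if best is None or np.linalg.norm(delta) < np.linalg.norm(best):
            best = delta.copy()
        eps = np.linalg.norm(delta) * (1 - gamma)  # adversarial: shrink the ball
    elif np.isfinite(eps):
        eps *= 1 + gamma                           # not adversarial: grow the ball
    # Normalized gradient descent on the logit (its gradient w.r.t. x is w).
    delta -= alpha * w / np.linalg.norm(w)
    # Project the perturbation back onto the epsilon-sized L2 ball.
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta *= eps / norm

# np.linalg.norm(best) approaches the true minimum distance 1.6
```

The shrink-on-success / grow-on-failure adaptation of epsilon is what lets the attack home in on the smallest perturbation instead of requiring a fixed budget up front.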
With the implementation in this repository (the attack follows the Foolbox attack interface):

```python
import foolbox as fb

from src.attacks.fmn import L1FMNAttack

model = ...  # pytorch model
fb_model = fb.models.PyTorchModel(model, bounds=(0, 1))  # set bounds to the input range
attack = L1FMNAttack()

# images: input batch; criterion: e.g. fb.criteria.Misclassification(labels)
advs, _, is_adv = attack(fb_model, images, criterion, epsilons=None)
```
With the FMN implementation included in Foolbox:

```python
import foolbox as fb

model = ...  # pytorch model
fb_model = fb.models.PyTorchModel(model, bounds=(0, 1))  # set bounds to the input range
attack = fb.attacks.L1FMNAttack()
advs, _, is_adv = attack(fb_model, samples, labels, epsilons=None)
```
With SecML, through its Foolbox wrapper:

```python
import foolbox as fb
from secml.ml.classifiers import CClassifierPyTorch
from secml.adv.attacks.evasion import CAttackEvasionFoolbox

model = ...  # pytorch model
secml_model = CClassifierPyTorch(model=model, pretrained=True, ...)  # wraps the pytorch model in SecML
attack = CAttackEvasionFoolbox(secml_model, y_target=None,
                               fb_attack_class=fb.attacks.L1FMNAttack)
y_pred, _, adv_ds, _ = attack.run(samples, labels)
```
With Adversarial Library:

```python
from adv_lib.attacks import fmn

model = ...  # pytorch model
norm = 1  # run the attack with the L1 norm
results = fmn(model, inputs, labels, norm)
```
These are results against a 9-layer ConvNet trained on MNIST. Check out the notebooks for more examples.
If you use FMN in your work, please cite us using the following BibTeX entry:
```bibtex
@article{pintor2021fast,
  title={Fast minimum-norm adversarial attacks through adaptive norm constraints},
  author={Pintor, Maura and Roli, Fabio and Brendel, Wieland and Biggio, Battista},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}
```