First of all, I would like to thank you for this incredible work!

I would have expected the gradient of the loss to be computed w.r.t. the input image rather than the perturbation (i.e., delta in the code below) in each iteration of the PGD attack. May I ask why the gradient is instead taken w.r.t. the perturbation (delta.grad.data.sign()) at every iteration?

Thanks.
if delta_init is not None:
    delta = delta_init
else:
    delta = torch.zeros_like(xvar)

delta.requires_grad_()
for ii in range(nb_iter):
    outputs = predict(xvar + delta)
    loss = loss_fn(outputs, yvar)
    if minimize:
        loss = -loss

    loss.backward()
    if ord == np.inf:
        # gradient is taken w.r.t. delta, not xvar
        grad_sign = delta.grad.data.sign()
        delta.data = delta.data + batch_multiply(eps_iter, grad_sign)
        # project back into the eps-ball and the valid pixel range
        delta.data = batch_clamp(eps, delta.data)
        delta.data = clamp(xvar.data + delta.data,
                           clip_min, clip_max) - xvar.data
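For concreteness, here is a minimal sketch comparing the two gradients on a single forward pass. The toy model, shapes, and data are made up for illustration and are not part of the advertorch code; since the network only ever sees xvar + delta, the gradient w.r.t. delta comes out numerically identical to the gradient w.r.t. the input:

import torch
import torch.nn as nn

# Toy model and data, purely illustrative (not from advertorch)
predict = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
loss_fn = nn.CrossEntropyLoss()

xvar = torch.rand(4, 3, 8, 8, requires_grad=True)
yvar = torch.randint(0, 10, (4,))

delta = torch.zeros_like(xvar)
delta.requires_grad_()

loss = loss_fn(predict(xvar + delta), yvar)
loss.backward()

# d(loss)/d(delta) and d(loss)/d(xvar) hold the same values,
# because d(xvar + delta)/d(delta) = d(xvar + delta)/d(xvar) = I
print(torch.allclose(delta.grad, xvar.grad))  # True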