What is FGSM?
Fast Gradient Sign Method
What is a downside of using gradient descent for attacking?
expensive to find an adversarial example x+d
many forward and backward passes needed
no direct control over the size of the perturbation d
What are the basic assumptions and goals of FGSM?
assumption:
we have a linear model g: R^n -> R
g(x) = w^T x
goal:
construct x+d so that
||d||_inf <= epsilon
limit the distortion
|g(x+d) - g(x)| maximized
-> maximize impact of distortion / adversarial example
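Written as one constrained maximization problem (same notation as above):

```latex
\max_{d}\ \lvert g(x+d) - g(x) \rvert
\qquad \text{s.t.} \qquad
\lVert d \rVert_{\infty} \le \epsilon
```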
How can one (mathematically) maximize |g(x+d) - g(x)|?
write out: g(x+d) - g(x) = w^T (x+d) - w^T x = w^T d = sum_i w_i d_i (the w^T x terms cancel)
-> can be maximized by giving each d_i the maximum possible magnitude (-> |d_i| = epsilon)
and the same sign as w_i (so that all terms in the sum are positive…)
=> d_i = epsilon * sign(w_i)
(see the numerical check below)
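A minimal numpy sketch of this argument; the weight vector, dimension, and epsilon are made-up illustrative values. It compares w^T d for the FGSM choice d = epsilon * sign(w) against random perturbations in the same epsilon ball:

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical linear model g(x) = w^T x and a made-up budget epsilon
w = rng.normal(size=100)
eps = 0.1

# FGSM choice: push every coordinate by eps in the direction of its weight
d_fgsm = eps * np.sign(w)

# random perturbations inside the same L_infinity ball, for comparison
d_rand = rng.uniform(-eps, eps, size=(1000, 100))

print("FGSM:        w^T d =", w @ d_fgsm)            # equals eps * ||w||_1
print("best random: w^T d =", (d_rand @ w).max())    # strictly smaller
```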
What is the actual maximum for the linear maximization approach?
-> epsilon * ||w||_1, i.e. n times the average weight magnitude times epsilon (worked example below)
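A worked example with made-up numbers, w = (0.5, -1.0, 2.0) and epsilon = 0.1:

```latex
\max_{\lVert d \rVert_\infty \le \epsilon} \lvert g(x+d) - g(x) \rvert
= \epsilon \lVert w \rVert_1
= 0.1 \cdot (0.5 + 1.0 + 2.0)
= 0.35
= \underbrace{3}_{n} \cdot \underbrace{(3.5/3)}_{\text{avg. } |w_i|} \cdot \underbrace{0.1}_{\epsilon}
```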
What is an effect of the distortion depending on n?
the distortion increases linearly with n (the size of the input to the layer)
but epsilon does not change with it…
=> the more input features the NN has
-> the more effective the attack
i.e. a smaller, less perceptible distortion epsilon suffices for the same effect (see the worked example below)
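Illustration with made-up numbers (average weight magnitude 0.01, epsilon = 0.05): the maximum output shift epsilon * n * avg|w| grows with n, while each feature is still only perturbed by epsilon:

```latex
n = 100:\quad 0.05 \cdot 100 \cdot 0.01 = 0.05
\qquad\qquad
n = 10\,000:\quad 0.05 \cdot 10\,000 \cdot 0.01 = 5
```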
How can we map the linear FGSM attack method to arbitrary (non-linear) networks?
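linearize the model locally: use the gradient of the loss w.r.t. the input x in place of the weight vector w
=> d = epsilon * sign(∇_x L(θ, x, y))
-> needs only one forward and one backward pass

A minimal PyTorch sketch of this step; model, x, and y are placeholders for a classifier and a labelled batch, and cross-entropy is assumed as the loss:

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps):
    """Untargeted FGSM step: d = eps * sign(grad_x L(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)       # assumed loss: cross-entropy
    grad_x, = torch.autograd.grad(loss, x)    # gradient w.r.t. the input only
    return eps * grad_x.sign()

# usage (hypothetical model and data):
# x_adv = x + fgsm_perturbation(model, x, y, eps=0.03)
```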
What are the advantages of FGSM? For what attacks?
finds the optimal L_infinity perturbation d for a benign sample (x, y)
in a single step
for untargeted attacks
Why does FGSM work on non-linear models?
a single step assumes linear behavior of the model and loss function
-> activation functions are approximately linear in an epsilon-neighborhood of x
What is adversarial retraining?
a defense approach against adversarial attacks
idea:
extend the training loss to incorporate untargeted attacks
-> the loss now also rewards classifying correctly even under adversarial distortion
-> here, we weight the clean and the adversarial term with the same factor (1/2):
J~(θ, x, y) = 1/2 * J(θ, x, y) + 1/2 * J(θ, x + epsilon * sign(∇_x J(θ, x, y)), y)
(a code sketch follows below)
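A PyTorch sketch of this modified loss; model, x, y, and the optimizer in the usage note are placeholders, cross-entropy is assumed, and the 1/2 weighting follows the card:

```python
import torch
import torch.nn.functional as F

def adversarial_training_loss(model, x, y, eps):
    """1/2 * loss on clean inputs + 1/2 * loss on one-step FGSM inputs."""
    # craft adversarial examples with a single FGSM step
    x_req = x.clone().detach().requires_grad_(True)
    craft_loss = F.cross_entropy(model(x_req), y)
    grad_x, = torch.autograd.grad(craft_loss, x_req)
    x_adv = (x + eps * grad_x.sign()).detach()

    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(x_adv), y)
    return 0.5 * clean_loss + 0.5 * adv_loss

# usage inside an ordinary training loop (hypothetical optimizer):
# loss = adversarial_training_loss(model, x, y, eps=0.03)
# optimizer.zero_grad(); loss.backward(); optimizer.step()
```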
What are the effects of adversarial retraining?
improves performance on clean test data
improves robustness on adversarial test data
the weights of the adversarially trained model are more localized
What is iterative FGSM?
apply FGSM T times, each time with a small step size alpha
after each step, clip the adversarial example back into the epsilon ball (L_infinity distance) around x
x_{t+1} = clip_{x, epsilon}( x_t + alpha * sign(∇_x L(θ, x_t, y)) ), where x_0 = x (see the sketch below)
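A PyTorch sketch of the iteration; the step size alpha and the number of steps are assumed hyperparameters, and cross-entropy is assumed as the loss:

```python
import torch
import torch.nn.functional as F

def iterative_fgsm(model, x, y, eps, alpha=0.01, steps=10):
    """Iterative FGSM: small FGSM steps, clipped back into the eps ball around x."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad_x, = torch.autograd.grad(loss, x_adv)
        # one small untargeted FGSM step
        x_adv = x_adv.detach() + alpha * grad_x.sign()
        # project back into the L_infinity ball of radius eps around the original x
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)
    return x_adv.detach()
```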
What is iterative FGSM useful for?
useful if FGSM oversteps
i.e. if local linearity does not hold across the full epsilon step…
W.r.t. what is FGSM optimal?
w.r.t. the L_infinity norm constraint ||d||_inf <= epsilon (it maximizes the linearized loss within that ball)