Authors:
Changwei Liu; Louis DiValentin; Aolin Ding and Malek Ben Salem
Affiliation:
Accenture Cyber Labs, 1201 Wilson Blvd, Arlington, VA, U.S.A.
Keyword(s):
Adversarial Example Attack, Input Transformation Ensembles, Adversarial Example Defense.
Abstract:
Input transformation techniques have been proposed to defend against adversarial example attacks in image classification systems. However, recent work has shown that, although input transformations and augmentations applied to adversarial samples can prevent unsophisticated adversarial example attacks, adaptive attackers can modify their optimization functions to subvert these defenses. Previous research, most notably BaRT (Raff et al., 2019), has suggested building a strong defense by stochastically combining a large number of individually weak defenses into a single barrage of randomized transformations, which raises the cost of searching the input space to levels that are computationally infeasible for adaptive attacks. While this research randomly selected input transformations with differing transformation effects to form a strong defense, a thorough evaluation against well-known state-of-the-art attacks across extensive combinations has not been performed. It therefore remains unclear whether employing a large barrage of randomly combined input transformations ensures a robust defense. To answer this question, we evaluated the BaRT defense using a large set of 33 input transformation techniques. Contrary to BaRT's recommendation of using five randomly combined input transformations, our findings indicate that this approach does not consistently provide a robust defense against strong attacks such as the PGD attack. As an improvement, we identify combinations that use only three strong input transformations yet still provide a resilient defense.
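For illustration, the following minimal Python sketch shows the barrage idea described above: drawing k transformations at random from a pool and applying them to each input before classification. The pool, parameter ranges, and helper names here are hypothetical placeholders, not the authors' 33-transformation pipeline.

    # Minimal sketch of a BaRT-style randomized transformation barrage.
    # The pool below is a small, hypothetical stand-in for the much larger
    # set of transformations evaluated in the paper (33 techniques).
    import random
    from PIL import Image, ImageFilter, ImageOps

    TRANSFORM_POOL = [
        lambda img: img.filter(ImageFilter.GaussianBlur(radius=random.uniform(0.5, 2.0))),
        lambda img: img.filter(ImageFilter.MedianFilter(size=3)),
        lambda img: ImageOps.posterize(img, bits=random.randint(3, 6)),
        lambda img: ImageOps.solarize(img, threshold=random.randint(128, 224)),
        lambda img: img.rotate(random.uniform(-10, 10)),
        lambda img: ImageOps.autocontrast(img),
    ]

    def barrage(img: Image.Image, k: int = 3) -> Image.Image:
        """Apply k randomly chosen transformations in a random order."""
        for transform in random.sample(TRANSFORM_POOL, k):
            img = transform(img)
        return img

    # Usage: preprocess each input before feeding it to the classifier.
    # img = Image.open("input.png").convert("RGB")
    # defended = barrage(img, k=3)

Because the transformations and their parameters are re-sampled on every call, an adaptive attacker cannot optimize against a single fixed preprocessing pipeline; the paper's finding is that which transformations end up in the draw matters more than how many are drawn.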