ADD: A Detection Method for Image-Processing Adversarial Defenses

Da Zhang, Jiazheng Sun, Chenxiao Xia, Ruinan Ma, Jun Zheng*

*Corresponding author of this work

Research output: Contribution to journal › Conference article › peer-review

Abstract

Many studies have demonstrated the vulnerability of Deep Neural Networks (DNNs) to adversarial attacks. While numerous works have proposed high-performance adversarial attacks and defenses, little research addresses detecting which defense a model uses. We observe that different attacks vary in effectiveness against the same defense, so knowing the defense in use helps in choosing a more effective attack. In this paper, we propose a novel method for detecting defenses by generating defense detection examples, using both gradient-based and decision-based algorithms. A defense detection example is generated for a specific defense: it exhibits no adversarial properties before the defense processing, but becomes adversarial once the defense is applied. By detecting the specific image-processing defense in use, we can select more effective attacks and enhance existing attacks against that defense. Experiments on ImageNet (five models) and CIFAR-10 (six models), with a total of nine defenses, demonstrate the effectiveness of our method.
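The core idea — an input that is classified correctly in raw form but becomes adversarial only after the defense's preprocessing — can be sketched with a toy setup. Everything below is an illustrative assumption, not the paper's actual algorithm: a tiny linear "model", bit-depth reduction (feature quantization) standing in for an image-processing defense, and a naive random decision-based search for a defense detection example.

```python
import random

# Assumed toy stand-ins (not the paper's setup):
# - a fixed linear binary classifier on 4-dimensional inputs
W = [1.0, -2.0, 0.5, 1.5]
B = -0.2

def model(x):
    """Return the predicted class (0 or 1) for input x."""
    s = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 if s > 0 else 0

def defense(x, step=0.5):
    """Toy image-processing defense: quantize each feature to a
    coarse grid (analogous to bit-depth reduction)."""
    return [round(xi / step) * step for xi in x]

def detect_example(x, label, trials=20000, eps=0.3, seed=0):
    """Naive decision-based search for a defense detection example:
    a nearby input classified as `label` in raw form, but
    misclassified after the defense's preprocessing."""
    rng = random.Random(seed)
    for _ in range(trials):
        cand = [xi + rng.uniform(-eps, eps) for xi in x]
        if model(cand) == label and model(defense(cand)) != label:
            return cand
    return None  # no detection example found within the budget

x = [0.6, 0.1, -0.4, 0.3]   # a clean input; model(x) == 1
adv = detect_example(x, label=1)
```

If `adv` is found, observing that the model's prediction on `adv` flips only when the quantization step is applied signals that this particular defense is in place; a gradient-based variant would instead optimize the same two-part objective directly.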
