TY - JOUR
T1 - ADD
T2 - 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2025
AU - Zhang, Da
AU - Sun, Jiazheng
AU - Xia, Chenxiao
AU - Ma, Ruinan
AU - Zheng, Jun
N1 - Publisher Copyright:
© 2025 Institute of Electrical and Electronics Engineers Inc. All rights reserved.
PY - 2025
Y1 - 2025
N2 - Many studies have demonstrated the vulnerability of Deep Neural Networks (DNNs) to adversarial attacks. While numerous research efforts have proposed high-performance adversarial attacks and defenses, little work has addressed detecting which defense a model employs. We observe that different attacks vary in effectiveness against the same defense, and that knowing the defense helps in choosing a more effective attack. In this paper, we propose a novel method for detecting defenses by generating defense detection examples, using both gradient-based and decision-based algorithms. Defense detection examples are generated for a specific defense: they exhibit no adversarial properties before the defense is applied, but become adversarial once processed by it. This allows us to detect specific image-processing defenses and then select, or enhance, attacks against such defenses. We conduct experiments on ImageNet with five models and on CIFAR-10 with six models, against a total of nine defenses, and demonstrate the effectiveness of our method.
AB - Many studies have demonstrated the vulnerability of Deep Neural Networks (DNNs) to adversarial attacks. While numerous research efforts have proposed high-performance adversarial attacks and defenses, little work has addressed detecting which defense a model employs. We observe that different attacks vary in effectiveness against the same defense, and that knowing the defense helps in choosing a more effective attack. In this paper, we propose a novel method for detecting defenses by generating defense detection examples, using both gradient-based and decision-based algorithms. Defense detection examples are generated for a specific defense: they exhibit no adversarial properties before the defense is applied, but become adversarial once processed by it. This allows us to detect specific image-processing defenses and then select, or enhance, attacks against such defenses. We conduct experiments on ImageNet with five models and on CIFAR-10 with six models, against a total of nine defenses, and demonstrate the effectiveness of our method.
KW - adversarial attacks
KW - adversarial defenses
KW - Deep neural networks
KW - defense detection
UR - http://www.scopus.com/pages/publications/105009599698
U2 - 10.1109/ICASSP49660.2025.10889555
DO - 10.1109/ICASSP49660.2025.10889555
M3 - Conference article
AN - SCOPUS:105009599698
SN - 0736-7791
JO - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
JF - Proceedings - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing
Y2 - 6 April 2025 through 11 April 2025
ER -