Abstract

The transferability of adversarial examples is of central importance to transfer-based black-box adversarial attacks. Previous works on generating transferable adversarial examples focus on attacking given pretrained surrogate models, while the connections between surrogate models and adversarial transferability have been overlooked. In this paper, we propose Lipschitz Regularized Surrogate (LRS) for transfer-based black-box attacks, a novel approach that transforms surrogate models towards favorable adversarial transferability. Using such transformed surrogate models, any existing transfer-based black-box attack can run without any change, yet achieves much better performance. Specifically, we impose Lipschitz regularization on the loss landscape of surrogate models to enable a smoother and more controlled optimization process for generating more transferable adversarial examples. In addition, this paper also sheds light on the connection between the inner properties of surrogate models and adversarial transferability, where three factors are identified: smaller local Lipschitz constant, smoother loss landscape, and stronger adversarial robustness. We evaluate our proposed LRS approach by attacking state-of-the-art standard deep neural networks and defense models. The results demonstrate significant improvement in attack success rates and transferability. Our code is available at https://github.com/TrustAIoT/LRS.
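
The sketch below illustrates the general idea of Lipschitz regularization on a surrogate's loss landscape; it is not the authors' implementation (see the repository link above for that). It fine-tunes a pretrained surrogate with a first-order penalty on the input-gradient norm of the loss, one common way to encourage a smaller local Lipschitz constant and a smoother landscape. The function name `lrs_style_loss` and the weight `lam` are assumptions for illustration.

```python
# Hedged sketch of Lipschitz-style regularization on the loss landscape
# of a surrogate model; illustrative only, not the official LRS code.
import torch
import torch.nn.functional as F

def lrs_style_loss(model, x, y, lam=0.1):
    """Cross-entropy plus an input-gradient-norm penalty (assumed form)."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Gradient of the loss w.r.t. the input; its norm bounds the local
    # Lipschitz constant of the loss landscape around x.
    (grad,) = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad.flatten(1).norm(dim=1).mean()
    return ce + lam * penalty

# Usage: one fine-tuning step on a surrogate (model, images, labels assumed):
#   loss = lrs_style_loss(model, images, labels)
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
# Any existing transfer-based attack can then be run on the fine-tuned
# surrogate unchanged.
```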

Department(s)

Computer Science

Second Department

Electrical and Computer Engineering

Comments

National Science Foundation, Grant 2008878

International Standard Serial Number (ISSN)

2374-3468; 2159-5399

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Association for the Advancement of Artificial Intelligence, All rights reserved.

Publication Date

25 Mar 2024
