Abstract
As Artificial Intelligence (AI) systems increasingly underpin critical applications, from autonomous vehicles to biometric authentication, their vulnerability to transferable attacks is a growing concern. These attacks, designed to generalize across instances, domains, models, tasks, modalities, or even hardware platforms, pose severe risks to security, privacy, and system integrity. This survey delivers the first comprehensive review of transferable attacks across seven major categories: evasion, backdoor, data poisoning, model stealing, model inversion, membership inference, and side-channel attacks. We introduce a unified six-dimensional taxonomy (cross-instance, cross-domain, cross-modality, cross-model, cross-task, and cross-hardware) that systematically captures the diverse transfer pathways of adversarial strategies. Through this framework, we examine both the underlying mechanics and the practical implications of transferable attacks on AI systems. Furthermore, we review cutting-edge methods for enhancing attack transferability, organized around data augmentation and optimization strategies. By consolidating fragmented research and identifying critical future directions, this work provides a foundational roadmap for understanding, evaluating, and defending against transferable threats in real-world AI systems.
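To make the two strategy families named above concrete, the sketch below illustrates a transfer-style evasion attack that combines a data-augmentation lever (random resize-and-pad, in the spirit of input-diversity methods) with an optimization lever (momentum accumulation, in the spirit of MI-FGSM). It is a minimal PyTorch illustration under stated assumptions, not a method defined by this survey; the `transferable_attack` function, its parameters, and the surrogate `model` are hypothetical.

```python
import torch
import torch.nn.functional as F

def transferable_attack(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Craft adversarial images from x (N, C, H, W, square, values in [0, 1])
    on a white-box surrogate `model`; the result is then typically evaluated
    against unseen black-box targets to measure transferability."""
    alpha = eps / steps                # per-step perturbation budget
    g = torch.zeros_like(x)            # accumulated momentum direction
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        # Data augmentation: random resize-and-pad so the gradient does not
        # overfit the surrogate's exact input geometry (input diversity).
        size = int(x.shape[-1] * (0.9 + 0.1 * torch.rand(1).item()))
        x_div = F.interpolate(x_adv, size=size, mode="bilinear",
                              align_corners=False)
        x_div = F.pad(x_div, (0, x.shape[-1] - size, 0, x.shape[-1] - size))
        loss = F.cross_entropy(model(x_div), y)
        (grad,) = torch.autograd.grad(loss, x_adv)
        # Optimization: momentum smooths the update direction across steps,
        # which empirically improves cross-model transfer.
        g = mu * g + grad / grad.abs().mean().clamp_min(1e-12)
        step = x_adv.detach() + alpha * g.sign()
        # Project into the eps-ball around x, then back to valid pixel range.
        x_adv = (x + (step - x).clamp(-eps, eps)).clamp(0.0, 1.0)
    return x_adv.detach()
```

The augmentation step and the momentum term are the two levers around which the survey organizes transferability-enhancement methods; swapping either one (for example, a different input transformation, or a different gradient-stabilization scheme) yields other variants in the same two families.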
Recommended Citation
G. Wang et al., "Beyond Boundaries: A Comprehensive Survey of Transferable Attacks on AI Systems," May 2025.
Department(s)
Computer Science
Keywords and Phrases
Transferability, Adversarial Attack, Evasion, Backdoor, Data Poisoning, Model Inversion, Model Stealing, Membership Inference, Side-Channel Attack
Document Type
Article - Journal
Document Version
Citation
File Type
text
Language(s)
English
Publication Date
May 12, 2025
