Doctoral Dissertations

Abstract

Despite their remarkable achievements across a multitude of benchmark tasks, deep learning (DL) models exhibit significant fragility to adversarial examples (AEs), i.e., subtle modifications applied to inputs at test time that nonetheless mislead DL models. These meticulously crafted perturbations possess the remarkable property of transferability: an adversarial example that effectively fools one model often retains its effectiveness against another model, even if the two models were trained independently. This research delves into the characteristics influencing the transferability of adversarial examples from three distinct and complementary perspectives: data, optimization, and model. Firstly, from the data perspective, we propose a new method for crafting transferable AEs based on random erasing (RE), which replaces part of the image with random noise, thereby increasing the diversity of adversarial perturbations and helping to stabilize gradient fluctuations. Secondly, from the optimization perspective, we penalize the input gradient norm when optimizing the objective for generating AEs, aiming to find AEs that lie within flat regions of the loss landscape. Thirdly, from the model perspective, we propose a novel strategy centered on transforming surrogate models through Lipschitz regularization. Finally, we introduce the normalized Hessian trace, a metric capable of accurately and consistently characterizing the curvature of loss landscapes, and build on it to propose CR-SAM, a novel optimization technique that integrates curvature regularization into the Sharpness-Aware Minimization (SAM) optimizer to bolster the generalizability of deep neural networks across a range of image classification tasks. In summary, this research presents three complementary techniques that provide a comprehensive and practical approach to generating highly transferable adversarial examples. Furthermore, our exploration of metrics for describing the curvature of the loss landscape contributes to a deeper understanding of the optimization process and facilitates the enhancement of deep learning models' generalizability.
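The optimization-perspective idea lends itself to a brief illustration. Below is a minimal sketch, assuming a PyTorch classifier, of a PGD-style attack whose objective penalizes the input gradient norm so that the resulting adversarial example settles in a flatter region of the loss landscape; the function name, hyperparameters (eps, alpha, steps, lam), and loop structure are illustrative assumptions, not the dissertation's exact algorithm.

import torch
import torch.nn.functional as F

def flat_region_attack(model, x, y, eps=8/255, alpha=2/255, steps=10, lam=0.1):
    # Hypothetical sketch: PGD-style attack that maximizes the loss while
    # penalizing the norm of the input gradient (a flatness surrogate).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        # First-order input gradient, kept in the graph for the penalty term
        grad = torch.autograd.grad(loss, x_adv, create_graph=True)[0]
        # Penalized objective: high loss, small input-gradient norm
        obj = loss - lam * grad.norm(p=2)
        step = torch.autograd.grad(obj, x_adv)[0]
        x_adv = x_adv.detach() + alpha * step.sign()
        # Project back into the eps-ball around x and the valid pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

In this sketch the penalty weight lam trades attack strength against flatness; the intuition is that an AE sitting in a flat region of one model's loss landscape is less tied to that model's idiosyncratic decision boundary and is therefore more likely to transfer.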

Advisor(s)

Wunsch, Donald C.
Luo, Tony T.

Committee Member(s)

Yang, Huiyuan
Hu, Wenqing
Nadendla, V. Sriram Siddhardh

Department(s)

Electrical and Computer Engineering

Second Department

Computer Science

Degree Name

Ph.D. in Computer Science

Publisher

Missouri University of Science and Technology

Publication Date

Summer 2024

Pagination

xi, 87 pages

Note about bibliography

Includes bibliographical references (pages 73-85)

Rights

© 2024 Tao Wu, All Rights Reserved

Document Type

Dissertation - Open Access

File Type

text

Language

English

Thesis Number

T 12447

Electronic OCLC #

1478162328
