Adversarial-Robust Transfer Learning for Medical Imaging Via Domain Assimilation

Abstract

Extensive research in medical imaging aims to uncover critical diagnostic features in patients, with AI-driven medical diagnosis relying on sophisticated machine learning and deep learning models to analyze, detect, and identify diseases from medical images. Despite the remarkable accuracy of these models under normal conditions, they grapple with trustworthiness issues: their output can be manipulated by adversaries who introduce strategic perturbations to the input images. Furthermore, the scarcity of publicly available medical images, a bottleneck for reliable training, has led contemporary algorithms to depend on models pretrained on large sets of natural images, a practice referred to as transfer learning. However, a significant domain discrepancy exists between natural and medical images, which causes AI models resulting from transfer learning to exhibit heightened vulnerability to adversarial attacks. This paper proposes a domain assimilation approach that introduces texture and color adaptation into transfer learning, followed by a texture preservation component to suppress undesired distortion. We systematically analyze the performance of transfer learning in the face of various adversarial attacks under different data modalities, with the overarching goal of fortifying the model's robustness and security in medical imaging tasks. The results demonstrate high effectiveness in reducing attack efficacy, contributing toward more trustworthy transfer learning in biomedical applications.
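
The record contains no code, so the following is a minimal, hypothetical PyTorch sketch of the ingredients the abstract names: an ImageNet-pretrained backbone reused via transfer learning, a simple FGSM-style input perturbation as a stand-in for the "strategic perturbations," and a toy color/texture `assimilate` preprocessing step. The function names, epsilon value, backbone choice, class count, and statistics-matching heuristic are illustrative assumptions, not the paper's actual method.

```python
# Minimal sketch (not the authors' released code) of the setting described in the
# abstract: transfer learning from natural-image weights, an FGSM-style attack,
# and a hypothetical color/texture "assimilation" preprocessing step.
import torch
import torch.nn as nn
from torchvision import models


def assimilate(x: torch.Tensor) -> torch.Tensor:
    """Hypothetical domain-assimilation step: nudge a grayscale-like medical image
    toward natural-image color statistics (a crude stand-in for learned adaptation)."""
    mean = torch.tensor([0.485, 0.456, 0.406]).view(1, 3, 1, 1)  # ImageNet channel means
    std = torch.tensor([0.229, 0.224, 0.225]).view(1, 3, 1, 1)   # ImageNet channel stds
    x = x.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)           # enforce 3 channels
    return (x - x.mean()) / (x.std() + 1e-6) * std + mean        # rough stats matching


def fgsm(model: nn.Module, x: torch.Tensor, y: torch.Tensor, eps: float = 0.03):
    """Fast Gradient Sign Method: one signed-gradient step on the input image."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()


# Transfer learning: ImageNet-pretrained backbone, new head for a 2-class task.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

x = torch.rand(1, 1, 224, 224)      # stand-in grayscale medical image
y = torch.tensor([1])               # stand-in label
x_adapted = assimilate(x)           # domain assimilation before inference
x_adv = fgsm(model, x_adapted, y)   # adversarial example against the adapted input
print(model(x_adapted).argmax(1), model(x_adv).argmax(1))
```

In practice, the paper's adaptation components are learned rather than the fixed statistics matching shown here; the sketch only makes concrete how a perturbation budget (eps) acts on an input that has first been mapped toward the natural-image domain.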

Department(s)

Computer Science

International Standard Book Number (ISBN)

978-981-97-2240-2

International Standard Serial Number (ISSN)

1611-3349; 0302-9743

Document Type

Article - Conference proceedings

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2024 Springer. All rights reserved.

Publication Date

01 Jan 2024
