Unmasking Dementia Detection by Masking Input Gradients: A JSM Approach to Model Interpretability and Precision


The evolution of deep learning and artificial intelligence has significantly reshaped technological landscapes. However, their effective application in crucial sectors such as medicine demands not just superior performance but trustworthiness as well. While interpretability plays a pivotal role, existing explainable AI (XAI) approaches often fail to reveal Clever Hans behavior, where a model makes (ungeneralizable) correct predictions by exploiting spurious correlations or biases in the data. Likewise, current post-hoc XAI methods are susceptible to generating unjustified counterfactual examples. In this paper, we approach XAI with an innovative model-debugging methodology realized through the Jacobian saliency map (JSM). To cast the problem into a concrete context, we employ Alzheimer's disease (AD) diagnosis as the use case, motivated by its significant impact on human lives and the formidable challenge of its early detection, which stems from the intricate nature of its progression. We introduce an interpretable, multimodal model for AD classification over its multi-stage progression, incorporating JSM as a modality-agnostic tool that provides insights into volumetric changes indicative of brain abnormalities. Our extensive evaluation, including an ablation study, demonstrates the efficacy of JSM for model debugging and interpretation, while significantly enhancing model accuracy as well.
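The Jacobian saliency map named in the title is, in general, the gradient of a model's output with respect to its input: large-magnitude entries mark input features the prediction is most sensitive to. The sketch below illustrates that general idea with a framework-free finite-difference approximation on a toy scorer; all names are illustrative and do not reproduce the paper's implementation.

```python
import numpy as np

def jacobian_saliency_map(model_fn, x, eps=1e-4):
    """Approximate the input-gradient (Jacobian) saliency map of a
    scalar model output via central finite differences.

    model_fn: callable mapping a 1-D input array to a scalar score.
    x: 1-D input array.
    Returns an array of the same shape as x whose entry i estimates
    d(model_fn)/d(x_i), i.e. the sensitivity of the score to input i.
    """
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        # Central difference: (f(x + e_i*eps) - f(x - e_i*eps)) / (2*eps)
        grad[i] = (model_fn(x + step) - model_fn(x - step)) / (2 * eps)
    return grad

# Toy "model": a linear scorer, whose exact input gradient is its weights.
w = np.array([0.5, -2.0, 1.0])
score = lambda v: float(w @ v)
saliency = jacobian_saliency_map(score, np.array([1.0, 1.0, 1.0]))
```

For the linear toy model the recovered saliency map equals the weight vector exactly; in practice, deep-learning frameworks compute the same quantity with automatic differentiation rather than finite differences.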


Computer Science


International Standard Serial Number (ISSN)

1611-3349; 0302-9743

Document Type

Article - Conference proceedings

© 2024 Springer. All rights reserved.

Publication Date

01 Jan 2024