Alternative Title
Eye-gaze Guided Multi-modal Alignment Framework for Radiology
Abstract
In medical multi-modal frameworks, the alignment of cross-modality features presents a significant challenge. Existing works typically learn features that are implicitly aligned from the data, without considering the explicit relationships present in the medical context. This reliance on data may lead to poor generalization of the learned alignment relationships. In this work, we propose the Eye-gaze Guided Multi-modal Alignment (EGMA) framework, which harnesses eye-gaze data for better alignment of medical visual and textual features. We explore the natural auxiliary role of radiologists' eye-gaze data in aligning medical images and text, and introduce a novel approach that uses eye-gaze data collected synchronously by radiologists during their diagnostic evaluations. We conduct downstream tasks of image classification and image-text retrieval on four medical datasets, where EGMA achieves state-of-the-art performance and stronger generalization across different datasets. Additionally, we explore the impact of varying amounts of eye-gaze data on model performance, highlighting the feasibility and utility of integrating this auxiliary data into a multi-modal alignment framework.
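The abstract does not specify how eye-gaze enters the alignment objective. The following is only a rough, hypothetical Python sketch of how fixation maps might re-weight image features inside a CLIP-style contrastive loss; the function name, tensor shapes, pooling choices, and temperature are illustrative assumptions, not the paper's implementation.

    # Minimal sketch (not the authors' released code): one plausible way eye-gaze
    # data could guide image-text alignment. All names, shapes, and hyperparameters
    # here are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def gaze_weighted_alignment(img_patches, txt_tokens, gaze_map, temperature=0.07):
        """Contrastive alignment between image patch features and report token
        features, with radiologist eye-gaze re-weighting the image patches.

        img_patches: (B, P, D) patch embeddings from an image encoder
        txt_tokens:  (B, T, D) token embeddings from a text encoder
        gaze_map:    (B, P) fixation density per patch, normalized to sum to 1
        """
        # Pool patches with gaze weights so fixated regions dominate the image feature.
        img_feat = torch.einsum('bpd,bp->bd', img_patches, gaze_map)
        txt_feat = txt_tokens.mean(dim=1)  # simple mean pooling over report tokens

        img_feat = F.normalize(img_feat, dim=-1)
        txt_feat = F.normalize(txt_feat, dim=-1)

        # Symmetric InfoNCE loss over the batch (CLIP-style).
        logits = img_feat @ txt_feat.t() / temperature
        targets = torch.arange(logits.size(0), device=logits.device)
        loss = 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))
        return loss

    # Toy usage with random tensors.
    if __name__ == "__main__":
        B, P, T, D = 4, 49, 32, 256
        gaze = torch.rand(B, P)
        gaze = gaze / gaze.sum(dim=1, keepdim=True)
        loss = gaze_weighted_alignment(torch.randn(B, P, D), torch.randn(B, T, D), gaze)
        print(float(loss))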
Recommended Citation
C. Ma, H. Jiang, W. Chen, Y. Li, Z. Wu, X. Yu, Z. Liu, L. Guo, D. Zhu, T. Zhang, D. Shen, T. Liu, and X. Li, "Eye-gaze Guided Multi-modal Alignment for Medical Representation Learning," Advances in Neural Information Processing Systems, vol. 37, arXiv, Jan 2024.
The definitive version is available at https://doi.org/10.48550/arXiv.2403.12416
Department(s)
Computer Science
Publication Status
Open Access
International Standard Serial Number (ISSN)
1049-5258
Document Type
Article - Conference proceedings
Document Version
Citation
File Type
text
Language(s)
English
Rights
© 2025 arXiv, All rights reserved.
Publication Date
01 Jan 2024
