XDR-LVLM: An Explainable Vision-Language Large Model for Diabetic Retinopathy Diagnosis

AI in healthcare
Published: arXiv 2508.15168v1
Authors

Masato Ito, Kaito Tanaka, Keisuke Matsuda, Aya Nakayama

Abstract

Diabetic Retinopathy (DR) is a major cause of global blindness, necessitating early and accurate diagnosis. While deep learning models have shown promise in DR detection, their black-box nature often hinders clinical adoption due to a lack of transparency and interpretability. To address this, we propose XDR-LVLM (eXplainable Diabetic Retinopathy Diagnosis with LVLM), a novel framework that leverages Vision-Language Large Models (LVLMs) for high-precision DR diagnosis coupled with natural language-based explanations. XDR-LVLM integrates a specialized Medical Vision Encoder, an LVLM Core, and employs Multi-task Prompt Engineering and Multi-stage Fine-tuning to deeply understand pathological features within fundus images and generate comprehensive diagnostic reports. These reports explicitly include DR severity grading, identification of key pathological concepts (e.g., hemorrhages, exudates, microaneurysms), and detailed explanations linking observed features to the diagnosis. Extensive experiments on the Diabetic Retinopathy (DDR) dataset demonstrate that XDR-LVLM achieves state-of-the-art performance, with a Balanced Accuracy of 84.55% and an F1 Score of 79.92% for disease diagnosis, and superior results for concept detection (77.95% BACC, 66.88% F1). Furthermore, human evaluations confirm the high fluency, accuracy, and clinical utility of the generated explanations, showcasing XDR-LVLM's ability to bridge the gap between automated diagnosis and clinical needs by providing robust and interpretable insights.

Paper Summary

Problem
Diabetic Retinopathy (DR) is a major cause of global blindness, requiring early and accurate diagnosis. However, traditional diagnosis by experienced ophthalmologists faces challenges such as a scarcity of medical professionals, subjective interpretation, and limited diagnostic efficiency. Deep learning models have shown promise in DR detection, but their black-box nature hinders clinical adoption due to a lack of transparency and interpretability.
Key Innovation
The researchers propose XDR-LVLM, a novel framework that leverages Vision-Language Large Models (LVLMs) for high-precision DR diagnosis coupled with natural language-based explanations. XDR-LVLM integrates a Medical Vision Encoder, an LVLM Core, and employs Multi-task Prompt Engineering and Multi-stage Fine-tuning to deeply understand pathological features within fundus images and generate comprehensive diagnostic reports.
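The pipeline described above can be sketched in code. This is a minimal illustrative stand-in, not the authors' implementation: the class names, prompt wording, and placeholder encoder are all assumptions chosen to show how a vision encoder, an LVLM core, and multi-task prompts (grading, concept detection, explanation) fit together.

```python
# Illustrative sketch of the XDR-LVLM pipeline. All names and prompt
# wording are assumptions, not the paper's actual implementation.

DR_GRADES = ["No DR", "Mild", "Moderate", "Severe", "Proliferative"]
CONCEPTS = ["hemorrhages", "exudates", "microaneurysms"]

def build_multitask_prompt(task: str) -> str:
    """Compose one task-specific instruction, mirroring the paper's
    multi-task prompt engineering (severity grading, pathological
    concept detection, and diagnostic explanation)."""
    prompts = {
        "grading": (
            "Grade the diabetic retinopathy severity of this fundus "
            f"image as one of: {', '.join(DR_GRADES)}."
        ),
        "concepts": (
            "List which of the following pathological concepts are "
            f"visible in the image: {', '.join(CONCEPTS)}."
        ),
        "explanation": (
            "Explain how the observed features support the diagnosis."
        ),
    }
    return prompts[task]

class XDRLVLM:
    """Toy stand-in for the framework: a Medical Vision Encoder
    feeding an LVLM core that answers each task prompt."""

    def encode_image(self, image) -> list:
        # Placeholder: a real Medical Vision Encoder would return
        # patch embeddings extracted from the fundus image.
        return [0.0] * 4

    def generate_report(self, image) -> dict:
        features = self.encode_image(image)
        # A real LVLM core would condition generation on `features`;
        # here we only assemble the per-task prompts that would be
        # sent to it, one per diagnostic sub-task.
        return {task: build_multitask_prompt(task)
                for task in ("grading", "concepts", "explanation")}
```

Structuring the report as three coupled sub-tasks is what lets a single model emit both the severity grade and the natural-language rationale that links observed lesions to that grade.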
Practical Impact
XDR-LVLM has the potential to revolutionize the diagnosis of Diabetic Retinopathy by providing accurate and interpretable results. Clinicians can understand the model's reasoning, assess its reliability, and use it as a robust decision-support tool. This can lead to better patient outcomes, improved clinical efficiency, and reduced costs associated with unnecessary treatments.
Analogy / Intuitive Explanation
Imagine a doctor looking at a patient's retina and explaining the diagnosis in simple terms, pointing out specific features such as hemorrhages, exudates, and microaneurysms. XDR-LVLM works similarly, using a combination of visual and language understanding to generate detailed reports that explain the diagnosis and provide a clear rationale for the decision. This approach makes it easier for clinicians to trust the model's results and use them to inform their decisions.
Paper Information
Categories:
cs.CV
arXiv ID:
2508.15168v1
