
Prof. Lamiaa Abdallah Ahmed Elrefaei :: Publications:

Title:
Nourhan Ibrahim, Hala Badr, Eman Abdel-Ghaffar, Lamiaa Elrefaei, "Explainable Artificial Intelligence in Medical Imaging for Tumor and Alzheimer's Diagnosis: A Review", JES. Journal of Engineering Sciences, Vol. 54, No. 4, 2026. doi: 10.21608/jesaun.2025.392308.1538
Authors: Nourhan Ibrahim, Hala Badr, Eman Abdel-Ghaffar, Lamiaa Elrefaei
Year: 2026
Keywords: Not Available
Journal: JES. Journal of Engineering Sciences
Volume: 54
Issue: 4
Pages: 207-233
Publisher: Assiut University, Faculty of Engineering
Local/International: Local
Paper Link:
Full paper: Not Available
Supplementary materials: Not Available
Abstract:

Recently, incorporating artificial intelligence (AI) into healthcare has shown considerable promise. Despite this progress, the limited interpretability of AI systems presents challenges for their implementation in clinical environments. To address the opaque nature of these so-called black-box models, researchers have introduced explainable artificial intelligence (XAI) methods. These techniques increase the transparency of AI model decision-making, thereby fostering trust among clinicians, supporting faster and more accurate diagnoses, and ensuring compliance with healthcare regulations. Deep learning (DL) approaches have achieved significant success across multiple healthcare diagnostic tasks, yet their black-box nature remains a major barrier to clinical adoption, as the lack of interpretability limits trust and acceptance among healthcare professionals. Recent research in XAI aims to uncover the key features that influence model decisions, thereby improving transparency. This survey highlights current applications of explainable DL in tumor classification and Alzheimer's disease (AD) detection. Unlike previous literature reviews, which primarily emphasize general interpretability frameworks, this study focuses on the practical implementation of XAI techniques across different medical domains and modalities, providing a structured comparative analysis grounded in clinical applicability. Additionally, this work presents a classification of existing XAI approaches, examines current limitations, and highlights possible directions for future research in the domain.
