
Dr. Mostafa Elsayed Ahmed Ibrahim :: Publications:

Title:
Q. Abbas, Mostafa E.A. Ibrahim, and Abdul Rauf Baig. Transfer Learning-based Computer-aided Diagnosis System for Predicting Grades of Diabetic Retinopathy. Computers, Materials & Continua, 71(2), (2022).
Authors: Q. Abbas, Mostafa E.A. Ibrahim, and Abdul Rauf Baig
Year: 2022
Keywords: Diabetic Retinopathy; retinal fundus images; computer-aided diagnosis system; deep learning; transfer learning; convolutional neural network
Journal: Computers, Materials & Continua
Volume: 71
Issue: 2
Pages: Not Available
Publisher: Tech Science Press
Local/International: International
Paper Link:
Full paper: Mostafa Elsayed Ahmed Ibrahim_Transfer Learning-based Computer-aided Diagnosis System for Predicting Grades of Diabetic Retinopathy.pdf
Supplementary materials: Not Available
Abstract:

Diabetic retinopathy (DR) diagnosis through digital fundus images requires clinical experts to recognize the presence and importance of many intricate features. This task is difficult and time-consuming for ophthalmologists. Therefore, many computer-aided diagnosis (CAD) systems have been developed to automate this DR screening process. In this paper, a CAD-DR system is proposed based on preprocessing and a pre-trained, transfer-learning-based convolutional neural network (PCNN) to recognize the five stages of DR from retinal fundus images. To develop this CAD-DR system, a preprocessing step is performed in a perceptual-oriented color space to enhance the DR-related lesions, and then a standard pre-trained PCNN model is improved to obtain high classification results. The architecture of the PCNN model is based on four main phases. Firstly, the training process of the proposed PCNN is accomplished by using the expected gradient length (EGL) to decrease the image-labeling effort during the training of the CNN model. Secondly, the most informative patches and images are automatically selected using a few labeled training samples. Thirdly, the PCNN method generates useful masks for prognosis and identifies regions of interest. Fourthly, the DR-related lesions involved in the classification task, such as microaneurysms, hemorrhages, and exudates, are detected and then used for recognition of DR. The PCNN model is pre-trained using a high-end graphical processing unit (GPU) on the publicly available Kaggle benchmark. The obtained results demonstrate that the CAD-DR system outperforms other state-of-the-art systems in terms of sensitivity (SE), specificity (SP), and accuracy (ACC). On the test set of 30,000 images, the CAD-DR system achieved an average SE of 93.20%, SP of 96.10%, and ACC of 98%. This result indicates that the proposed CAD-DR system is appropriate for screening the severity level of DR.
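The pipeline described in the abstract combines lesion-oriented preprocessing in a perceptual color space with a pre-trained CNN fine-tuned by transfer learning for five-grade classification. The sketch below illustrates that general idea only; it assumes CIELAB as the perceptual-oriented color space, CLAHE as the contrast-enhancement step, and an ImageNet-pretrained ResNet-50 backbone (torchvision), none of which are specified in the abstract. The paper's actual PCNN architecture and its EGL-based selection of informative patches are not reproduced here.

```python
# Minimal transfer-learning sketch for 5-grade DR classification.
# Assumptions (not from the paper): CIELAB + CLAHE preprocessing,
# ImageNet-pretrained ResNet-50 backbone, a new 5-class classification head.
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_GRADES = 5  # DR grades 0-4 (no DR .. proliferative DR)


def enhance_fundus(bgr_image: np.ndarray) -> np.ndarray:
    """Enhance lesion contrast in a perceptual color space (CIELAB assumed)."""
    lab = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    l = clahe.apply(l)  # equalize only the lightness channel
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)


def build_model() -> nn.Module:
    """ImageNet-pretrained backbone with a new 5-class head (transfer learning)."""
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    for p in model.parameters():  # freeze the pretrained feature extractor
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, NUM_GRADES)  # trainable head
    return model


preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

if __name__ == "__main__":
    model = build_model().eval()
    img = enhance_fundus(cv2.imread("fundus.jpg"))  # hypothetical input file
    rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    x = preprocess(rgb).unsqueeze(0)  # shape: 1 x 3 x 224 x 224
    with torch.no_grad():
        grade = model(x).argmax(dim=1).item()
    print(f"Predicted DR grade: {grade}")
```

In a full training setup, the EGL-based active selection mentioned in the abstract would sit between data loading and fine-tuning, ranking unlabeled images or patches by the expected magnitude of their gradient updates so that only the most informative samples need manual labels.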
