
Dr. Ahmed Hagag :: Publications:

Title:
Extracting feature fusion and co-saliency clusters using transfer learning techniques for improving remote sensing scene classification
Authors: Atif A Aljabri, Abdullah Alshanqiti, Ahmad B Alkhodre, Ayyub Alzahem, Ahmed Hagag
Year: 2023
Keywords: Not Available
Journal: Optik
Volume: 273
Issue: Not Available
Pages: 170408
Publisher: Urban & Fischer
Local/International: International
Paper Link: Not Available
Full Paper: Not Available
Supplementary Materials: Not Available
Abstract:

Scene classification of very high-resolution (VHR) remote sensing imagery, which attributes semantic labels to land cover, supports many applications across diverse domains, yet conventional remote sensing image classification techniques have not met the requirements of real applications. Deep convolutional neural networks (CNNs) have recently demonstrated competitive performance owing to their strong feature extraction abilities; these approaches rely primarily on semantic information to increase classification performance. However, correctly classifying scene images with comparable structures and high inter-class similarity, and extracting features from different domains for the same scene, remain significant obstacles to achieving high classification accuracy. This study therefore proposes a VHR remote sensing image classification model based on discriminant correlation analysis (DCA) fusion of transfer deep CNN and co-saliency features. First, a global feature is extracted from the original VHR image with a pre-trained EfficientNet-V2L CNN. Second, a co-saliency feature is extracted from the co-saliency image to distinguish between similar classes. Third, DCA-based deep feature fusion combines the global and co-saliency features to save time and obtain robust features representing the scene: DCA shortens the feature vector and emphasizes the differences between VHR images of different classes. Finally, a multilayer perceptron (MLP) classifies the image. The effectiveness of the proposed method was verified on five benchmark remote sensing datasets: the 30-class Aerial Image Dataset (AID), the 21-class UC Merced, the 38-class PatternNet, the 19-class WHU-RS19, and the 13-class KSA. The results demonstrate that the proposed model significantly improves performance compared with other CNN-based scene classification models. In addition, ablation studies confirm the effectiveness of the co-saliency feature and the DCA fusion algorithm.
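The DCA fusion stage the abstract describes can be sketched as follows. This is a minimal NumPy illustration of discriminant correlation analysis fusion of two feature sets, not the authors' implementation: the function names are illustrative, and random stand-in features take the place of the EfficientNet-V2L global feature and the co-saliency feature.

```python
import numpy as np

def dca_transform(X, labels, r=None):
    """Whiten the between-class scatter of X (shape: features x samples)
    and keep the r most discriminant directions (r <= n_classes - 1)."""
    classes = np.unique(labels)
    c = len(classes)
    mu = X.mean(axis=1)
    # Phi: one column per class, sqrt(n_k) * (class mean - global mean)
    Phi = np.column_stack([
        np.sqrt(np.sum(labels == k)) * (X[:, labels == k].mean(axis=1) - mu)
        for k in classes
    ])
    # Eigendecompose the small c x c matrix Phi^T Phi instead of the
    # large between-class scatter S_b = Phi Phi^T
    vals, vecs = np.linalg.eigh(Phi.T @ Phi)
    order = np.argsort(vals)[::-1][: (r or c - 1)]
    vals, vecs = vals[order], vecs[:, order]
    W = Phi @ vecs / vals           # whitening transform: W^T S_b W = I
    return W.T @ X

def dca_fuse(X, Y, labels, mode="concat"):
    """Minimal DCA-style fusion of two feature sets (features x samples)."""
    Xp = dca_transform(X, labels)
    Yp = dca_transform(Y, labels)
    # Decorrelate the two sets: SVD of the between-set covariance
    U, s, Vt = np.linalg.svd(Xp @ Yp.T)
    Xs = (U / np.sqrt(s)).T @ Xp     # transformed sets now have
    Ys = (Vt.T / np.sqrt(s)).T @ Yp  # identity cross-covariance
    return np.vstack([Xs, Ys]) if mode == "concat" else Xs + Ys
```

With c classes, each projected set has at most c − 1 dimensions, so the fused vector is short, which is the feature-length reduction the abstract attributes to DCA; in the proposed pipeline the fused vector would then feed the MLP classifier.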
