
Dr. Ahmed Hagag :: Publications

Title:
A Remote Sensing Scene Classification Model Based on EfficientNet-V2L Deep Neural Networks
Authors: Atif A. Aljabri, Abdullah Alshanqiti, Ahmad B. Alkhodre, Ayyub Alzahem, Ahmed Hagag
Year: 2022
Keywords: Not Available
Journal: IJCSNS International Journal of Computer Science and Network Security
Volume: 22
Issue: 10
Pages: 406-412
Publisher: IJCSNS
Local/International: International
Paper Link: Not Available
Supplementary Materials: Not Available
Abstract:

Scene classification of very high-resolution (VHR) imagery can attribute semantics to land cover in a variety of domains. Conventional techniques for remote sensing image classification have not met the requirements of real-world applications. Recent research has demonstrated that deep convolutional neural networks (CNNs) are effective for this task owing to their strong feature extraction capabilities. To improve classification performance, these approaches rely primarily on semantic information. Because abstract, global semantic information makes it difficult for a network to correctly classify scene images with similar structures and high interclass similarity, such networks achieve low classification accuracy. We propose a VHR remote sensing image classification model that extracts a global feature from the original VHR image using an EfficientNet-V2L CNN pre-trained to detect similar classes. The image is then classified using a multilayer perceptron (MLP). The method was evaluated on two benchmark remote sensing datasets: the 21-class UC Merced dataset and the 38-class PatternNet dataset. Compared with other state-of-the-art models, the proposed model significantly improves performance.
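As a rough illustration of the pipeline the abstract describes, the sketch below builds a frozen, ImageNet-pre-trained EfficientNet-V2L backbone as a global feature extractor with an MLP classifier on top, using the Keras applications API. The input resolution, MLP width, dropout rate, and optimizer are assumptions for illustration only; the abstract does not specify them, and the 21-class UC Merced dataset is used as the example output size.

import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 21  # e.g., the 21-class UC Merced dataset

# Pre-trained EfficientNet-V2L used purely as a global feature extractor:
# the ImageNet classification head is dropped, and global average pooling
# collapses the feature maps into a single global feature vector per image.
backbone = tf.keras.applications.EfficientNetV2L(
    include_top=False,
    weights="imagenet",
    input_shape=(480, 480, 3),  # assumed input resolution
    pooling="avg",
)
backbone.trainable = False  # freeze the backbone during classifier training

# MLP classifier on top of the global feature (layer sizes are assumptions).
model = models.Sequential([
    backbone,
    layers.Dense(512, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

Freezing the backbone keeps training cheap and matches the transfer-learning setup implied by "pre-trained"; fine-tuning some of the top backbone layers after the MLP head converges is a common variant.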
