
Asst. Lect. Abdelfatah Mohamed :: Publications:

Title:
LSTM-Autoencoder Deep Learning Technique for PAPR Reduction in Visible Light Communication
Authors: Abdelfatah Mohamed, Adly S Eldien, Mostafa M Fouda, Reham S Saad
Year: 2022
Keywords: Autoencoder; BER; CCDF; deep learning; LSTM; OFDM; PAPR; RNN; VLC
Journal: IEEE Access
Volume: 10
Issue: Not Available
Pages: 113028-113034
Publisher: IEEE
Local/International: International
Paper Link:
Full paper Abd elfatah Mohamed_LSTM-Autoencoder_Deep_Learning_Technique_for_PAPR_Reduction_in_Visible_Light_Communication.pdf
Supplementary materials Not Available
Abstract:

Visible light communication (VLC) is a relatively new wireless communication technology that enables high-data-rate transfer. Because of its ability to support high-speed transmission and eliminate inter-symbol interference, orthogonal frequency division multiplexing (OFDM) is widely employed in VLC. The peak-to-average power ratio (PAPR) is an issue that degrades the performance of OFDM systems, particularly VLC systems, because the signal is distorted by the nonlinearity of light-emitting diodes (LEDs). The proposed Long Short-Term Memory Autoencoder (LSTM-AE) method combines an autoencoder with an LSTM to learn a compact representation of its input, allowing the model to handle variable-length input sequences and to predict or generate variable-length output sequences. This study compares the proposed model with various PAPR reduction strategies and shows that it achieves a superior reduction in the PAPR of the transmitted signal while maintaining the BER. The model also provides a flexible trade-off between PAPR and BER.
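The paper itself does not include code, but the two ideas the abstract relies on are easy to illustrate: PAPR is defined as the peak instantaneous power of the time-domain signal divided by its average power, PAPR = max|x(t)|^2 / E[|x(t)|^2], and an LSTM autoencoder compresses each OFDM symbol through an LSTM encoder into a latent vector, then reconstructs it with an LSTM decoder. The sketch below, written in Python with Keras, is a minimal illustration under assumed settings: the 64-subcarrier symbol size, the 32-unit latent LSTM, the random training symbols, and the plain MSE reconstruction loss are all assumptions for illustration, not the paper's actual configuration (whose training objective trades off PAPR against BER).

```python
# Minimal sketch of an LSTM autoencoder for PAPR reduction (illustrative only).
# Layer sizes, sequence length, training data, and the MSE loss are assumptions,
# not the configuration used in the paper.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def papr_db(x):
    """Peak-to-average power ratio of a (possibly complex) signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

N = 64          # assumed number of OFDM subcarriers per symbol
FEATURES = 2    # real and imaginary parts of each subcarrier

# Encoder: an LSTM compresses the OFDM symbol into a compact latent vector.
inputs = layers.Input(shape=(N, FEATURES))
latent = layers.LSTM(32)(inputs)

# Decoder: repeat the latent vector and expand it back to a full sequence.
x = layers.RepeatVector(N)(latent)
x = layers.LSTM(32, return_sequences=True)(x)
outputs = layers.TimeDistributed(layers.Dense(FEATURES))(x)

autoencoder = models.Model(inputs, outputs)
autoencoder.compile(optimizer="adam", loss="mse")

# Train on random BPSK-like symbols (a placeholder for real OFDM training data).
symbols = np.random.choice([-1.0, 1.0], size=(1000, N, FEATURES))
autoencoder.fit(symbols, symbols, epochs=5, batch_size=64, verbose=0)

# Compare the PAPR of the time-domain signal before and after the autoencoder.
reconstructed = autoencoder.predict(symbols[:1], verbose=0)[0]
orig = np.fft.ifft(symbols[0, :, 0] + 1j * symbols[0, :, 1])
recon = np.fft.ifft(reconstructed[:, 0] + 1j * reconstructed[:, 1])
print(f"PAPR original:      {papr_db(orig):.2f} dB")
print(f"PAPR reconstructed: {papr_db(recon):.2f} dB")
```

In this toy setup the MSE loss only encourages faithful reconstruction; obtaining the PAPR-BER compromise described in the abstract would require adding a PAPR penalty term to the loss, which is where the paper's flexible trade-off comes from.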
