Machine-generated text, particularly deepfake content, presents substantial challenges for social media platforms, where detecting such content is crucial to maintaining trust and preventing misinformation. This study addresses the identification of machine-generated textual content on social media, focusing on short texts (e.g., tweets). We propose an intelligent fake news detection framework to combat disinformation, built on the DeBERTaV3 model and trained and evaluated on the TweepFake and PHEME datasets. The framework analyzes tweets and social media posts to determine whether a given text was created by a human or a bot account (i.e., is machine-generated). The efficacy of the proposed model is evaluated against several machine learning and deep learning baselines, including BERT, RoBERTa, SVM, SVM-RBF, random forest, CNN, and LSTM. The proposed framework achieves a detection accuracy of 97.12%, demonstrating exceptional performance. Experimental findings indicate that the DeBERTa architecture, together with the adopted data preprocessing and embedding methods, enables efficient and effective tweet classification, reliably distinguishing human-authored tweets from bot-generated ones. The scalability and computational efficiency of the proposed framework are also compared against those of the other models.