
Fake_News_Detection_With_Explainable_AI_Using_Lime

In the contemporary landscape of information dissemination, the rapid growth of digital platforms has facilitated the swift sharing of news and information. However, this accelerated flow of information has also fuelled the proliferation of misinformation, particularly fake news. The consequences of misinformation can be severe, shaping public perceptions, exacerbating social tensions, and even influencing critical decision-making processes.

Addressing the challenge of fake news detection requires sophisticated approaches that leverage advances in natural language processing (NLP) and machine learning. This thesis centres on a comparative analysis of two prominent models for fake news detection: Bi-Directional Long Short-Term Memory (Bi-LSTM) and BERT (Bidirectional Encoder Representations from Transformers). Moreover, the study incorporates Explainable AI principles using LIME (Local Interpretable Model-Agnostic Explanations) to enhance the transparency and interpretability of the models' decision-making.

The Bi-LSTM model, a recurrent neural network architecture, effectively captures sequential dependencies within textual data. BERT, a transformer-based model, has achieved state-of-the-art performance across a wide range of NLP tasks by leveraging attention mechanisms for contextual understanding. This research offers a thorough comparative analysis of the two models, evaluating their performance, efficiency, and interpretability in the context of fake news detection.
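For concreteness, the sketch below shows what a minimal Bi-LSTM classifier of this kind might look like in Keras. The vocabulary size, sequence length, and layer widths are illustrative assumptions, not the exact configuration used in this repository.

```python
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000  # assumed tokenizer vocabulary size

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 128),      # learned word embeddings
    layers.Bidirectional(layers.LSTM(64)),  # reads each article in both directions
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # 1 = fake, 0 = genuine (label convention assumed)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# BERT, by contrast, would typically be fine-tuned from a pretrained checkpoint, e.g.:
# from transformers import TFBertForSequenceClassification
# bert = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
```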

Understanding the nuances of fake news detection is important for advancing detection technology, fostering media literacy, and building a resilient information-sharing ecosystem. By integrating Explainable AI through LIME, this research strives to make model predictions more transparent, empowering users to understand and trust the decisions made by these advanced models.
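As an illustration of how LIME fits into this workflow, the sketch below generates an explanation for a single article. Here `article_text` and `clf` (any classifier exposing `predict_proba` over raw text) are hypothetical names, not this repository's actual variables.

```python
from lime.lime_text import LimeTextExplainer

explainer = LimeTextExplainer(class_names=["genuine", "fake"])
explanation = explainer.explain_instance(
    article_text,        # hypothetical: the raw news article as a string
    clf.predict_proba,   # hypothetical: maps a list of texts to class probabilities
    num_features=10,     # surface the ten words that most influenced this prediction
)
print(explanation.as_list())  # [(word, signed_weight), ...] per-word contributions
```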

The objectives of this thesis include assessing the accuracy of the Bi-LSTM and BERT models in distinguishing genuine from fake news articles, comparing the efficiency of each model in handling large datasets, and implementing LIME to provide interpretable explanations for the models' predictions. A further objective is to examine the overlap between the most frequent words identified by TF-IDF vectorization and the words highlighted in the LIME explanations (a sketch follows below). The results of this study could provide valuable insights into the ongoing effort to combat misinformation, aiding researchers, policymakers, and technologists in the continuous quest for more reliable and interpretable fake news detection mechanisms.
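One way the TF-IDF/LIME overlap check could be implemented is sketched below. The `corpus` variable and the feature counts are illustrative assumptions, and `explanation` refers to the LIME object from the earlier sketch.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# corpus: hypothetical list of article strings used to fit the vectorizer.
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectorizer.fit_transform(corpus)

# Rank terms by mean TF-IDF weight across the corpus and keep the top 50.
mean_weights = np.asarray(tfidf.mean(axis=0)).ravel()
top_idx = mean_weights.argsort()[::-1][:50]
tfidf_top = {vectorizer.get_feature_names_out()[i] for i in top_idx}

# Words LIME flagged for one article (from the earlier sketch).
lime_words = {word for word, _ in explanation.as_list()}
print("Shared words:", tfidf_top & lime_words)
```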
