
AI-Toxic-Content-Classification-in-Django

Python 3.5 / 3.6 / 3.7 · Django 3.0

Table of Contents

Introduction

We aim to detect inappropriate comments from the Quora website by building a binary classification model and applying it in a web application: you enter a question, and the application classifies your words as TOXIC or NOT. We use word embeddings to map each text into numeric vectors. We then trained three different models (GRU, LSTM, and Attention) and one combination of them. We used Django to build the AI application, which offers friendly interaction and a clean interface. In the evaluation, our accuracy reaches 0.70583, and the application provides stable toxicity-detection services.
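The first step of the pipeline above, mapping a text to a fixed-length sequence of embedding vectors before it is fed to a GRU/LSTM/Attention model, can be sketched roughly as follows. The vocabulary, embedding size, and the `encode` helper are illustrative assumptions, not the repository's actual code (a real run would load pretrained embeddings and a full vocabulary):

```python
import numpy as np

# Illustrative vocabulary and randomly initialised embedding matrix.
# Real code would load pretrained embeddings (e.g. GloVe) instead.
vocab = {"<pad>": 0, "<unk>": 1, "you": 2, "are": 3, "great": 4}
EMBED_DIM = 4
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(len(vocab), EMBED_DIM))

def encode(text, max_len=6):
    """Map a sentence to a (max_len, EMBED_DIM) array of word vectors."""
    ids = [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]
    ids = (ids + [vocab["<pad>"]] * max_len)[:max_len]  # pad / truncate
    return embeddings[ids]

x = encode("you are great")
print(x.shape)  # (6, 4)
```

The resulting `(max_len, EMBED_DIM)` array is what a recurrent model consumes, one embedding vector per time step.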

Usage

Architecture

Models

Attention model

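The core idea of the attention model, scoring each time step of the recurrent outputs, normalising the scores with a softmax, and taking the weighted sum as the sentence vector, can be sketched with NumPy. The scoring vector `w` and the sizes below are illustrative assumptions; in the trained model these are learned parameters:

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Score each time step, softmax the scores, and return the
    attention-weighted sum of hidden states (one sentence vector)."""
    scores = hidden_states @ w             # (T,) one score per step
    alphas = softmax(scores)               # attention weights, sum to 1
    return alphas @ hidden_states, alphas  # (H,), (T,)

rng = np.random.default_rng(1)
H = rng.normal(size=(5, 8))   # e.g. 5 time steps of GRU/LSTM outputs
w = rng.normal(size=8)        # learned scoring vector (assumed here)
vec, alphas = attention_pool(H, w)
```

The pooled vector `vec` is then passed to a final dense layer that outputs the TOXIC probability.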

Combination model

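One simple way to combine the three models is soft voting: average their predicted toxic probabilities and apply a decision threshold. The README does not specify the exact combination rule, so the averaging below is an assumption for illustration:

```python
def combine(probabilities, threshold=0.5):
    """Average toxic-probability estimates from several models and
    apply a decision threshold (a simple soft-voting ensemble)."""
    p = sum(probabilities) / len(probabilities)
    return ("TOXIC" if p >= threshold else "NOT TOXIC"), p

# e.g. GRU, LSTM, and Attention model outputs for one question:
label, p = combine([0.8, 0.6, 0.7])   # label == "TOXIC"
```

Soft voting tends to be more robust than any single model because uncorrelated errors partially cancel out in the average.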

Applications

UI Interface


Result for application


Author

License

MIT ©