idastani7/Spam-Message-Detection

In this implementation, I classify messages as spam or ham by fine-tuning BERT, obtaining 93% accuracy. The BERT model was proposed in "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" by Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. It is a bidirectional Transformer pretrained with a combination of a masked language modeling objective and next sentence prediction on a large corpus comprising the Toronto Book Corpus and Wikipedia.
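A minimal sketch of how spam/ham classification with BERT can look, assuming the Hugging Face `transformers` library and PyTorch are installed. The checkpoint name `bert-base-uncased`, the two-class label mapping, and the example messages are illustrative assumptions, not the repository's actual configuration; in practice the model would first be fine-tuned on a labeled spam dataset.

```python
import torch

# Assumed mapping from class id to label name (hypothetical; the repo's
# actual id-to-label convention may differ).
LABELS = {0: "ham", 1: "spam"}

def classify(texts, model, tokenizer):
    """Tokenize a batch of messages and return a predicted label per message."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits  # shape: (batch, num_labels)
    return [LABELS[i] for i in logits.argmax(dim=-1).tolist()]

if __name__ == "__main__":
    # Heavy part: downloads pretrained weights on first run.
    from transformers import BertTokenizer, BertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2  # spam vs. ham head
    )
    model.eval()
    print(classify(["Win a FREE prize now!!!", "See you at lunch?"],
                   model, tokenizer))
```

The classification head on top of BERT's pooled output is randomly initialized until fine-tuned, so meaningful predictions require training on a labeled spam corpus first.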
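The masked language modeling objective mentioned above can be sketched in plain Python. This is an illustrative reconstruction of BERT's published masking recipe (select ~15% of tokens; of those, 80% become `[MASK]`, 10% a random token, 10% unchanged), not code from this repository; `VOCAB` is a toy vocabulary for the random-replacement case.

```python
import random

MASK = "[MASK]"
VOCAB = ["hello", "world", "spam", "ham", "free", "prize"]  # toy vocabulary

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking: each selected position becomes [MASK] 80% of the
    time, a random token 10% of the time, and stays unchanged 10% of the
    time; the original token at every selected position is the target the
    model must predict."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            targets.append(tok)  # loss is computed at this position
            r = rng.random()
            if r < 0.8:
                inputs.append(MASK)
            elif r < 0.9:
                inputs.append(rng.choice(VOCAB))
            else:
                inputs.append(tok)  # kept as-is, but still predicted
        else:
            inputs.append(tok)
            targets.append(None)  # no loss at unselected positions
    return inputs, targets
```

Keeping 10% of selected tokens unchanged (rather than always masking) reduces the mismatch between pretraining, where `[MASK]` appears, and fine-tuning, where it never does.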