This project uses PyTorch and BERT, a transformer-based model, for binary classification of sentences as truthful or deceptive on a small dataset of 320 examples. The pipeline preprocesses labels with a LabelEncoder, tokenizes text with the BERT tokenizer, and trains BertForSequenceClassification from Hugging Face's Transformers library. It sets up a DataLoader for efficient batch processing, optimizes with PyTorch's AdamW optimizer, and trains on GPU when available. By leveraging BERT-base's self-attention mechanisms across 12 transformer layers and 12 attention heads, the goal is accurate classification for natural language processing tasks.
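The steps above can be sketched roughly as follows. This is a minimal, hypothetical reconstruction, not the repository's actual code: the model checkpoint (`bert-base-uncased`), hyperparameters (batch size, learning rate, epochs, max length), and the use of scikit-learn's `LabelEncoder` are assumptions.

```python
# Hedged sketch of the described pipeline: LabelEncoder -> tokenizer ->
# DataLoader -> BertForSequenceClassification -> AdamW on GPU if available.
# Hyperparameters and checkpoint name are illustrative assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from sklearn.preprocessing import LabelEncoder
from transformers import BertTokenizer, BertForSequenceClassification


def encode_labels(labels):
    """Map string labels ('truthful'/'deceptive') to integer ids."""
    le = LabelEncoder()
    return le.fit_transform(labels), le


def make_loader(sentences, label_ids, tokenizer, batch_size=16, max_len=128):
    """Tokenize sentences and wrap ids, masks, and labels in a DataLoader."""
    enc = tokenizer(list(sentences), padding="max_length", truncation=True,
                    max_length=max_len, return_tensors="pt")
    dataset = TensorDataset(enc["input_ids"], enc["attention_mask"],
                            torch.tensor(label_ids, dtype=torch.long))
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)


def train(sentences, labels, epochs=3, lr=2e-5):
    """Fine-tune BERT for binary truthful/deceptive classification."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    label_ids, _ = encode_labels(labels)
    loader = make_loader(sentences, label_ids, tokenizer)

    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, batch_labels in loader:
            optimizer.zero_grad()
            out = model(input_ids=input_ids.to(device),
                        attention_mask=attention_mask.to(device),
                        labels=batch_labels.to(device))
            out.loss.backward()  # cross-entropy loss computed by the model
            optimizer.step()
    return model, tokenizer
```

Passing `labels=` to `BertForSequenceClassification` makes the model return the cross-entropy loss directly, which keeps the training loop short; with only 320 examples, a small batch size and few epochs help avoid overfitting.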
YashaswiniSampath/BERT-Based-Sentiment-Classification-Using-PyTorch
About
Using PyTorch and BERT, classify sentences as truthful or deceptive in a small dataset, optimizing model training for natural language processing tasks.