iamsouravbanerjee/Toxic-Comment-Classification-Challenge

Detect and classify toxic comments into categories like Toxic, Severe Toxic, Obscene, Threat, Insult, and Identity Hate. This involves multilabel classification in Natural Language Processing (NLP), where a comment can belong to multiple classes.
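A common baseline for this kind of multilabel setup is binary relevance: train one independent binary classifier per label, so a single comment can receive several labels at once. Below is a minimal sketch of that idea using scikit-learn's `TfidfVectorizer` and `MultiOutputClassifier`; the toy comments, toy label matrix, and lowercase label names are illustrative assumptions, not the challenge's actual data.

```python
# Binary-relevance multilabel sketch: one binary classifier per toxicity
# label. The corpus and label matrix below are toy stand-ins.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

comments = [
    "you are a wonderful person",
    "I will hurt you",
    "you stupid idiot",
    "filthy obscene garbage",
    "you are severely awful trash",
    "hateful slur against your group",
    "what a lovely day",
]
# One row per comment, one column per label; a row can have several 1s.
y = np.array([
    [0, 0, 0, 0, 0, 0],
    [1, 0, 0, 1, 0, 0],
    [1, 0, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 0, 0, 0, 1],
    [0, 0, 0, 0, 0, 0],
])

model = make_pipeline(
    TfidfVectorizer(),
    MultiOutputClassifier(LogisticRegression(max_iter=1000)),
)
model.fit(comments, y)

# Predict a 0/1 flag for each of the six labels on a new comment.
pred = model.predict(["you stupid idiot, I will hurt you"])[0]
print([label for label, flag in zip(LABELS, pred) if flag])
```

With real data, the released train.csv provides the binary columns directly, and per-label ROC AUC is the usual way to evaluate each classifier.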