This library detects toxicity in a piece of text and returns a toxicity percentage along with the toxic words it found.
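A percentage-plus-word-list result of this shape can be sketched with a simple word-list approach; the word list and function name below are illustrative, not the library's actual API:

```javascript
// Hypothetical sketch: score text against a small blocked-word list.
// TOXIC_WORDS and detectToxicity are illustrative names, not this library's API.
const TOXIC_WORDS = new Set(["idiot", "stupid", "trash"]);

function detectToxicity(text) {
  // Tokenize into lowercase words, then count how many are in the list.
  const words = text.toLowerCase().match(/[a-z']+/g) || [];
  const toxicWords = words.filter((w) => TOXIC_WORDS.has(w));
  const percentage =
    words.length === 0 ? 0 : Math.round((toxicWords.length / words.length) * 100);
  return { percentage, toxicWords };
}

console.log(detectToxicity("you are a stupid idiot"));
// → { percentage: 40, toxicWords: ["stupid", "idiot"] }
```

Real libraries typically ship a much larger word list and handle obfuscations (leetspeak, repeated letters), but the input/output contract is the same idea.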
Updated Jun 7, 2022 - JavaScript
The Toxic Comment Detector is a tool powered by Hugging Face’s unitary/toxic-bert model, designed to identify harmful, offensive, or abusive language in real time. Built with a ReactJS frontend and a Flask backend, it provides detailed insights into toxicity levels, enabling safer online environments.
Click below to check out the website.
Toxicity detection in a conversation or in individual phrases.
BadFilter.js to the rescue! We've crafted a supercharged, customizable solution that helps developers filter out inappropriate words like a pro. Let's make the internet a friendlier place, one word at a time!
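A filter of this kind masks matched words rather than scoring the whole text; here is a minimal sketch of the idea (the word list and function name are illustrative, not BadFilter.js's actual API):

```javascript
// Illustrative word filter: replaces each blocked word with asterisks.
// BLOCKED and filterText are hypothetical names, not BadFilter.js's API.
const BLOCKED = ["idiot", "stupid"];

function filterText(text) {
  // Match each blocked word on word boundaries, case-insensitively,
  // and mask it with one asterisk per character.
  return BLOCKED.reduce(
    (out, word) =>
      out.replace(new RegExp(`\\b${word}\\b`, "gi"), "*".repeat(word.length)),
    text
  );
}

console.log(filterText("Don't be stupid!")); // → "Don't be ******!"
```

Word-boundary matching (`\b`) keeps the filter from mangling innocent substrings, though production filters also need to handle punctuation tricks and deliberate misspellings.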
An anti-toxicity Discord bot to ease moderation.