# toxicity

Here are 26 public repositories matching this topic...

Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ PyTorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.

  • Updated Jul 29, 2025
  • Python
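
The models described in the entry above are built on 🤗 Transformers, so scoring a comment follows the standard pipeline pattern. A minimal sketch, assuming the publicly hosted `unitary/toxic-bert` checkpoint (an assumption for illustration, not stated in the listing):

```python
# Minimal sketch: score a comment for toxicity with a 🤗 Transformers pipeline.
# The checkpoint name "unitary/toxic-bert" is an assumption for illustration;
# substitute whichever model the repository actually publishes.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="unitary/toxic-bert",
    top_k=None,  # return a score for every toxicity label, not just the top one
)

# Batched input returns one list of {label, score} dicts per comment.
scores = classifier(["have a wonderful day"])[0]
print(scores)
```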

For a human-centered data science assignment, I analyzed how Google's Perspective API detects and categorizes African-American Vernacular English (AAVE), also called Black language, in data collected from Twitter.

  • Updated Mar 20, 2023
  • Python
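
For context, toxicity requests to the Perspective API follow Google's documented `commentanalyzer` client pattern. A minimal sketch, where the API key is a placeholder and the example text and attribute set are assumptions:

```python
# Minimal sketch of a Perspective API toxicity request, following Google's
# documented commentanalyzer client pattern. API_KEY is a placeholder.
from googleapiclient import discovery

API_KEY = "YOUR_PERSPECTIVE_API_KEY"

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

request = {
    "comment": {"text": "finna head out, y'all be safe"},  # illustrative tweet text
    "requestedAttributes": {"TOXICITY": {}},
}
response = client.comments().analyze(body=request).execute()

# Summary toxicity score in [0, 1]; an analysis like the one above compares
# such scores across AAVE and non-AAVE tweets.
print(response["attributeScores"]["TOXICITY"]["summaryScore"]["value"])
```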

An approach to measuring unintended bias in toxicity classification, applicable to evaluating toxic comments and messages across public forums and talk pages. We also demonstrate how imbalances in training data can lead to unintended bias in the resulting models, and therefore to potentially unfair applications. We have used a …

  • Updated May 20, 2021
  • Python
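
One widely used way to quantify this kind of unintended bias is subgroup AUC: the classifier's ROC AUC restricted to comments that mention a given identity. The sketch below is illustrative only; the column names and toy data are assumptions, not taken from the repository above.

```python
# Illustrative sketch of subgroup AUC, a common unintended-bias metric.
# Column names (score, toxic, mentions_identity) and values are assumptions.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.DataFrame({
    "score":             [0.9, 0.2, 0.8, 0.1, 0.7, 0.3],   # model toxicity scores
    "toxic":             [1,   0,   1,   0,   0,   0],     # ground-truth labels
    "mentions_identity": [True, True, False, False, True, False],
})

overall_auc  = roc_auc_score(df["toxic"], df["score"])
subgroup     = df[df["mentions_identity"]]
subgroup_auc = roc_auc_score(subgroup["toxic"], subgroup["score"])

# A subgroup AUC well below the overall AUC suggests the model is less
# reliable on identity-mentioning comments, i.e. unintended bias.
print(f"overall AUC:  {overall_auc:.3f}")
print(f"subgroup AUC: {subgroup_auc:.3f}")
```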

🤖 Intelligent AI agent for real-time content moderation: 97.5% accuracy | multi-stage ML pipeline | production-ready. Zero-tier filtering + embeddings + fine-tuned BERT + RAG.

  • Updated Jul 29, 2025
  • Python
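
A staged design like the one described above typically puts a cheap deterministic filter in front of any model call. A minimal sketch, where the blocklist, threshold, label name, and checkpoint are all assumptions for illustration rather than details from the repository:

```python
# Illustrative sketch of a staged moderation pipeline: a cheap tier-0
# blocklist check first, then a model-based toxicity score. The blocklist,
# threshold, label name, and checkpoint name are assumptions.
from transformers import pipeline

BLOCKLIST = {"badword1", "badword2"}  # placeholder tier-0 terms
classifier = pipeline("text-classification", model="unitary/toxic-bert", top_k=None)

def moderate(text: str) -> str:
    # Tier 0: trivial keyword filter, no model call needed.
    if any(term in text.lower() for term in BLOCKLIST):
        return "blocked (tier 0)"
    # Tier 1: fine-tuned BERT toxicity score; escalate above a threshold.
    scores = {r["label"]: r["score"] for r in classifier([text])[0]}
    if scores.get("toxic", 0.0) > 0.8:
        return "flagged for review (tier 1)"
    return "allowed"

print(moderate("have a nice day"))
```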
