Trained models & code to predict toxic comments on all 3 Jigsaw Toxic Comment Challenges. Built using ⚡ Pytorch Lightning and 🤗 Transformers. For access to our API, please email us at contact@unitary.ai.
Implements simulation-guided beam search for text generation. The idea was originally proposed by Choo et al. for neural combinatorial optimization; in this repo, we apply the technique to controllable text generation.
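As a rough illustration of the idea (not code from this repo), here is a minimal simulation-guided beam search on a toy objective: each candidate prefix is ranked not by its own partial score, but by the score of a greedy rollout (simulation) to completion. The vocabulary, target string, and scoring function below are all invented for the sketch.

```python
VOCAB = ["a", "b"]
TARGET = "abba"  # toy target; a real setup would score with a model

def final_score(seq):
    # Toy objective: number of positions matching TARGET.
    return sum(1 for x, y in zip(seq, TARGET) if x == y)

def greedy_rollout(prefix, max_len):
    # Simulate a completion: greedily extend the prefix to full length.
    seq = list(prefix)
    while len(seq) < max_len:
        seq.append(max(VOCAB, key=lambda t: final_score(seq + [t])))
    return seq

def simulation_guided_beam_search(max_len=4, beam_width=2):
    beams = [[]]
    for _ in range(max_len):
        candidates = [b + [t] for b in beams for t in VOCAB]
        # Rank each candidate by the score of its simulated completion,
        # not by the partial score of the prefix itself.
        candidates.sort(
            key=lambda c: final_score(greedy_rollout(c, max_len)),
            reverse=True,
        )
        beams = candidates[:beam_width]
    return "".join(beams[0])

print(simulation_guided_beam_search())  # abba
```

The point of the rollout is that a prefix which looks mediocre now may still lead to a high-scoring completion; scoring simulated completions lets the beam keep such prefixes alive.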
For a human-centered data science assignment, I analyzed how Google's Perspective API tool detects and categorizes Black language data, also known as AAVE (African-American Vernacular English), from Twitter.
An approach to measuring unintended bias in toxicity classification. It can be used to evaluate toxic comments and messages in public forums and talk pages. We also demonstrate how imbalances in training data can lead to unintended bias in the resulting models, and therefore to potentially unfair applications. We have used a …
A modern, multi-modal hate speech detection web app using the Perspective API. Analyze text, images, audio, and video for toxic or harmful content in a user-friendly interface.
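For the text path, a Perspective API call boils down to posting an `AnalyzeComment` request and reading a summary score back. The sketch below only builds the request body and parses a response in the documented shape; no network call is made, and the sample response values are made up.

```python
import json

# Documented Perspective API endpoint (an API key is required to call it).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_request(text, attributes=("TOXICITY",)):
    # Shape of the AnalyzeComment request body.
    return {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {attr: {} for attr in attributes},
    }

def summary_score(response, attribute="TOXICITY"):
    # Pull the 0..1 summary score for one attribute out of the response.
    return response["attributeScores"][attribute]["summaryScore"]["value"]

# Example response in the documented shape (the value here is invented).
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}

payload = build_request("example comment text")
print(json.dumps(payload))
print(summary_score(sample_response))  # 0.92
```

In the actual app, `payload` would be POSTed to `ANALYZE_URL` with an `?key=` query parameter, and the score thresholded to decide whether to flag the content.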
This API checks whether a given text is toxic. It also scores the text against six categories: toxic, severe toxic, obscene, threat, insult, and identity hate. The models were trained on the dataset from the Kaggle Toxic Comment Classification Challenge.
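Since the Jigsaw task is multi-label, an API like this typically returns one probability per category and flags the comment if any crosses a threshold. A minimal sketch of that post-processing step, with hypothetical scores standing in for real model output:

```python
# The six labels from the Kaggle Toxic Comment Classification Challenge.
JIGSAW_LABELS = ["toxic", "severe_toxic", "obscene",
                 "threat", "insult", "identity_hate"]

def to_flags(scores, threshold=0.5):
    # Convert per-label probabilities into one binary flag per label.
    return {label: scores.get(label, 0.0) >= threshold
            for label in JIGSAW_LABELS}

def is_toxic(scores, threshold=0.5):
    # The comment is flagged if any of the six labels crosses the threshold.
    return any(to_flags(scores, threshold).values())

# Hypothetical model output for one comment (values invented).
scores = {"toxic": 0.83, "insult": 0.61, "threat": 0.04}
print(to_flags(scores))
print(is_toxic(scores))  # True
```

The threshold of 0.5 is a common default, but per-label thresholds tuned on validation data usually perform better on the rarer labels such as threat and identity hate.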